Let the Windows 10 Start menu index the actual start menu for a change

I remember when I booted Windows Vista for the first time, there were three things that stood out at the time:

  1. Actual working 64-bit support, which was useful for virtualization (yes, I know about XP x64, and no, it does not count).
  2. The new color scheme. Let's face it, it was quite refreshing after years of blue and green (or gray for the classic theme users).
  3. The Start menu search bar. At long last I could drop my 20+ layer Start menu folder structure: just type the name of the program you need and it appears.

Over time I realized how much the Start menu search bar improved the experience of working with Windows Vista as a day-to-day OS. At Microsoft HQ they must have realized this too, because the Start menu search bar was still there in Windows 7.

Then Windows 10 came out (I skipped the Windows 8 and 8.1 disaster). Unfortunately, Microsoft figured it was a good idea to use the Start menu search to search the entire user profile, including documents, downloads, email, and much more. This meant it was now possible to search for a program and have the installer for that program listed before the program itself (and this happens quite often). It also introduced some weird behavior where the first three letters of a program's name give a hit, but the first four letters won't give the same hit. And in a lot of cases programs would not show up at all.

Fortunately, we can fix most of these problems by limiting the indexing for the Start menu search bar to only the Start menu itself.

But first we need to make sure all hidden files and folders are visible in Windows Explorer.

In a Windows Explorer window, click the “View” tab -> “Options” -> “View” tab. You should now see the window from image1 (below).

image1

Make sure “Hide protected operating system files (Recommended)” is NOT checked, and “Show hidden files, folders and drives” is selected.

Next up, open the indexing options by typing “indexing options” in the Start menu search bar (and hope it works); image2 should now appear.

image2

As you can see, there are a lot of folders in the index that have nothing to do with the Start menu, and the entire Users folder is indexed. To change this, click the “Modify” button.

image3

Now deselect all unwanted folders (clicking a folder/item in the bottom half of the screen (image3) will automatically navigate to that folder in the top half) and use the top part of the window (image3) to browse to the two Start menu locations.

One is located in “C:\ProgramData\Microsoft\Windows\Start Menu”, the other one is located in “C:\Users\<yourUserName>\AppData\Roaming\Microsoft\Windows\Start Menu”.

Check the boxes in front of both Start menu locations and click “OK”.

image4

Next click on “Advanced” -> “Rebuild” -> “OK” (image4).

image5

As you can see, there are only two folders left in the index overview, and the number of indexed items has dropped to a more reasonable level.

 

The Start menu search bar will now function almost like it did in Windows Vista and 7, and it will find the program you're looking for 🙂

 

Ubuntu 18.04 LTS kiosk for web or RDP

In this article we will show how to set up a browser or RDP kiosk based on Ubuntu Server 18.04 LTS. Because we are using the server edition, the kiosk system will have a very small footprint and far fewer packages installed compared to a stripped-down desktop kiosk.

First we need to install a fresh Ubuntu 18.04 LTS system. After the installation is complete, make sure to install any available updates by executing the following commands:

sudo apt-get update
sudo apt-get upgrade -y
sudo reboot

Kiosk purpose

Before we start configuring the kiosk, we first need to think about what we want to show on the kiosk unit. Different content types require different applications and configurations. In this article we will show how to install and use a Chromium-based kiosk and a Remmina (RDP) based kiosk. Other applications/types of content are also possible but might require additional research.

The kiosk can only serve one type of content at a time.

To go with the web content kiosk, install Chromium using the following command:

sudo apt-get install --no-install-recommends chromium-browser

To go with the remote desktop kiosk, install Remmina using the following command:

sudo apt-get install remmina -y

Automatic logon

Our kiosk needs to log on automatically. To achieve this we will override the getty service configuration to change the way TTY1 is started (TTY1 is the default console output on most Linux-based servers). To override a part of the getty configuration we need to execute the following command:

sudo systemctl edit getty@tty1

The above command will create an override file for the getty configuration. Paste the following text into this file to enable automatic login (replace <username> with a local account on the Ubuntu server). The empty ExecStart= line is intentional: it clears the default start command before the new one is set:

[Service]
ExecStart=
ExecStart=-/sbin/agetty -a <username> --noclear %I $TERM

Because the user configured in the above file will automatically log on to the system at boot, it is recommended to make sure the user cannot access sensitive documents (which should not be stored on a kiosk system in any case). The selected user may be in the sudoers file; this will not present a security problem because sudo always requires a password. For more information about auto logon: Muru, a user on askubuntu.com, has written a very interesting answer about auto logon.

Graphical user interface

Ubuntu Server has no X server and window manager installed by default; it's text-only. Our kiosk needs both an X server and a window manager to display any graphical applications.

Use the following command to install the X server (X.org) and window manager (Openbox):

sudo apt-get install --no-install-recommends xserver-xorg x11-xserver-utils xinit openbox -y

With the GUI installed, we need to configure it. X.org will work out of the box, but Openbox requires some minor configuration changes to start our kiosk application.

The Openbox configuration file is located at “/etc/xdg/openbox/autostart”. We can open it for editing with the following command:

sudo nano /etc/xdg/openbox/autostart

Replace the contents of the Openbox configuration file with one of the snippets below, depending on which package you installed earlier.

Web content/Chromium (replace <http://your-url-here> with your desired URL):

# Disable any form of screen saver / screen blanking / power management
xset s off
xset s noblank
xset -dpms

# Allow quitting the X server with CTRL-ALT-Backspace
setxkbmap -option terminate:ctrl_alt_bksp

# Start Chromium in kiosk mode
sed -i 's/"exited_cleanly":false/"exited_cleanly":true/' ~/.config/chromium/'Local State'
sed -i 's/"exited_cleanly":false/"exited_cleanly":true/; s/"exit_type":"[^"]\+"/"exit_type":"Normal"/' ~/.config/chromium/Default/Preferences
chromium-browser --disable-infobars --kiosk '<http://your-url-here>'
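The two sed lines above reset Chromium's crash flags, so the kiosk does not show the "restore pages?" bar after an unclean shutdown. A quick sketch of what the substitution does, using a scratch file with a simplified example of the JSON instead of the real profile:

```shell
# Scratch file standing in for ~/.config/chromium/'Local State';
# the path and JSON content here are simplified examples.
state=/tmp/LocalState
printf '{"exited_cleanly":false}' > "$state"

# Same substitution the autostart script performs.
sed -i 's/"exited_cleanly":false/"exited_cleanly":true/' "$state"
cat "$state"   # prints: {"exited_cleanly":true}
```

On the kiosk the real files are only touched right before Chromium starts, so any crash is papered over on the next boot.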

Remote Desktop/Remmina (change <path to remmina file> to the actual path to the desired remmina file):

# Disable any form of screen saver / screen blanking / power management
xset s off
xset s noblank
xset -dpms

# Allow quitting the X server with CTRL-ALT-Backspace
setxkbmap -option terminate:ctrl_alt_bksp

# Start Remmina in kiosk mode
remmina -c <path to remmina file>

#start remmina in normal mode (for configuration purposes)
#remmina

If you don't have a .remmina file, comment out the “remmina -c” line and uncomment the plain “remmina” line. Now when the X server is started, Remmina will start in normal mode.

The main part of the configuration is done. To start the GUI, simply type “startx” in the console; the GUI can be closed by pressing Ctrl+Alt+Backspace.

Autostart Graphical user interface

We cannot call a system a kiosk if we need to manually type “startx” after every reboot. So let's make sure we won't have to.

We can accomplish this by configuring the “.bash_profile” file for the auto logon user. Assuming we are already logged on as the auto logon user, use the following commands to create or open the “.bash_profile” file for editing:

cd
nano .bash_profile

Now append the following line to the file:

[[ -z $DISPLAY && $XDG_VTNR -eq 1 ]] && startx
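The guard only runs startx when no X session is already active ($DISPLAY is empty) and we are on virtual terminal 1. A small sketch of the same test, with the two variables set by hand so the logic can be checked outside a real login shell:

```shell
# Simulate the .bash_profile condition. On a real login these variables
# come from the session; here they are set manually for demonstration.
DISPLAY=""
XDG_VTNR=1

if [[ -z $DISPLAY && $XDG_VTNR -eq 1 ]]; then
  echo "would run startx"   # on the kiosk, startx runs here
else
  echo "normal shell"
fi
```

With DISPLAY set (an X session already running) or on any other TTY, the condition fails and you get a normal shell, which is handy for maintenance.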

 

Reboot and enjoy your kiosk 🙂

 

A large part of this article is based on this blog post about a similar setup on a Raspberry Pi.

Network configuration in Ubuntu Server 18.04

As you might have heard, Ubuntu (server) 18.04 uses a new network configuration system named Netplan.

Well, Netplan is not entirely new; it was introduced in Ubuntu 17.10, but I did not pay too much attention to it at the time because I prefer the Ubuntu LTS editions.

The first big change you will notice is the location where the network configuration is stored. If you grab the default image from the Ubuntu website, there will be a file named “50-cloud-init.yaml” located in “/etc/netplan/”. The funny part is, this is not the only place you can store network configuration files. A blog post on the Ubuntu blog shows there are three locations available for storing network configurations (in order of importance):

  • /run/netplan/*.yaml
  • /etc/netplan/*.yaml
  • /lib/netplan/*.yaml

You can place any number of .yaml files in each of the above directories, where the lowest alphabetical filename will be the leading one (A*.yaml before B*.yaml). If a filename is used in multiple directories, the one in the most important directory will be the leading one.

Did I mention the files are YAML? Great, let's use an indentation-dependent format; what could possibly go wrong when using nano or vi without formatting and validation tools?

Needless to say, this might become a bit confusing when you need to troubleshoot a system you did not configure or maintain.

But enough of my ranting; let's see how we can configure a static IP address.

Luckily, Canonical set up a webpage with information about Netplan (https://netplan.io); this page also contains a neat set of Netplan examples.

The main page also explains that the default configuration location should be “/etc/netplan/”. There are no guidelines about the names for the configuration files, so you can “go ham” on those.

The configuration itself is fairly straightforward. Here is a basic IPv4 configuration with two name servers:

network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 192.168.0.2/24
      gateway4: 192.168.0.1
      nameservers:
        addresses: [192.168.0.1, 8.8.8.8]

If you want IPv4 DHCP instead, see below (for IPv6, set “dhcp6” to true and “dhcp4” to false):

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      dhcp6: false

Don't forget to apply the configuration when you're done:

sudo netplan apply
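Given the earlier complaint about indentation-sensitive YAML, it can be worth feeding the file through a YAML parser before applying it. A hedged sketch, assuming python3 with PyYAML is installed; the path and file contents are examples:

```shell
# Write an example netplan config and syntax-check it before 'netplan apply'.
cat > /tmp/01-netcfg.yaml <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
EOF

# safe_load raises an error on bad indentation or broken syntax.
python3 -c "import yaml; yaml.safe_load(open('/tmp/01-netcfg.yaml')); print('YAML OK')"
```

Newer Netplan versions also offer "sudo netplan try", which applies the configuration and rolls it back automatically if you do not confirm within a timeout, so a typo won't lock you out of a remote box.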

If you need a more advanced configuration like more addresses on one interface or VLANs, check out the comprehensive example page on the Netplan website (https://netplan.io/examples).

They did a good job showing off a lot of different possibilities 🙂

 

Show online Teamspeak users without the client

It might sound like a useless project, but I wanted to be able to check who was using my Teamspeak server without having to be logged on all the time.

After a short search across the world wide web, I came across the “Powerful PHP Framework” from Planet TeamSpeak. This framework enables the user to access the TeamSpeak query system and do all kinds of fun stuff. The best part is, it is publicly available on GitHub and it is well documented.

During my search I had already found a piece of code which did almost exactly what I wanted. After some alterations I had the following code:

<span>
<?php
require_once("TeamSpeak3Lib/TeamSpeak3.php");

$enter = "<br>";
if (isset($_GET['TS']))
{
  $enter = "\r\n"; // CR+LF line breaks for plain-text output
}

$ts3_VirtualServer = TeamSpeak3::factory("serverquery://<queryUser>:<queryPassword>@<TeamspeakIP>:<queryPort>/?server_port=<serverPort>");

$status = "offline";
$count = 0;
$max = 0;
 
try {
    $status = $ts3_VirtualServer->getProperty("virtualserver_status");
    $count = $ts3_VirtualServer->getProperty("virtualserver_clientsonline") - $ts3_VirtualServer->getProperty("virtualserver_queryclientsonline");
    $max = $ts3_VirtualServer->getProperty("virtualserver_maxclients");
}

catch (Exception $e) {
    echo "QueryError: " . $e->getCode() . ' ' . $e->getMessage() . "</div>";
}

echo "TS3 Server Status: " . $status . $enter . "Clients online: " . $count . "/" . $max ; 

echo $enter.$enter;
echo "Online clients: " . $enter;

// query clientlist from virtual server and filter by platform
$arr_ClientList = $ts3_VirtualServer->clientList();
// walk through list of clients
foreach($arr_ClientList as $ts3_Client)
{
  if ($ts3_Client["client_platform"] != "ServerQuery")
  {
    echo $ts3_Client . $enter;
  }
}
?>
</span>

The above code produces the following output in a browser:

TS3 Server Status: online
Clients online: 1/32

Online clients:
projects-42

This did not solve my problem entirely: I wanted to show the output on my desktop, and opening a web browser and hitting F5 every few minutes is hardly ideal. Rainmeter to the rescue!

The following Rainmeter skin will show the above output in a very basic format, but it is perfect for me 🙂

[Rainmeter]
Update=1000
AccurateText=1
DynamicWindowSize=1
BackgroundMode=2
SolidColor=0,0,0,70

[MeasureSite]
Measure=WebParser
URL=http://<url/to/your/php/page>.php?TS
RegExp=(?siU)<span>(.*)</span>
UpdateRate=30

[MeasureInnerSite]
Measure=WebParser
URL=[MeasureSite]
StringIndex=1

[MeterShowSite]
Meter=String
MeasureName=MeasureInnerSite
Text=%1 
X=5
FontSize=11
FontColor=255,255,255,255
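The RegExp in [MeasureSite] captures everything between the <span> tags that the PHP page emits. The same capture can be sanity-checked from a shell, here with a made-up one-line sample of the page output:

```shell
# Feed a sample of the page through sed; the capture group mirrors what
# the Rainmeter RegExp (?siU)<span>(.*)</span> extracts.
printf '<span>TS3 Server Status: online</span>\n' |
  sed -n 's:.*<span>\(.*\)</span>.*:\1:p'
```

This prints only the text between the tags, which is exactly what [MeasureInnerSite] hands to the string meter.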

Deploy a Zabbix server to monitor your infrastructure

If you maintain an ICT infrastructure, you probably use (or are looking to use) a monitoring solution. Detecting and fixing problems before end users start experiencing them is something most ICT professionals love to accomplish, preferably every time a problem occurs.

Zabbix is an open-source monitoring solution for servers, network devices and (web) applications. In this article we will set up a Zabbix server and install the Zabbix client on a Linux and a Windows server.

Setting up the Zabbix Server

We will be using an Ubuntu 16.04 server for the Zabbix server (I know 18.04 is already out there, but because of some hypervisor-related problems I am not yet able to install 18.04).

Before we start installing Zabbix, we need to install a number of prerequisites. To install these, type in (or copy) the following commands:

sudo apt-get update
sudo apt-get install apache2 libapache2-mod-php7.0 php7.0 php7.0-xml php7.0-bcmath php7.0-mbstring mysql-server -y

Now we need to add the Zabbix repository. This will ensure we receive updates for Zabbix when they are released.

Zabbix uses an installer package to add the repository to the system. Use the following commands to download and install the Zabbix repository:

wget http://repo.zabbix.com/zabbix/3.2/ubuntu/pool/main/z/zabbix-release/zabbix-release_3.2-1+xenial_all.deb
sudo dpkg -i zabbix-release_3.2-1+xenial_all.deb

With that out of the way we can install Zabbix server and agent.

sudo apt-get update
sudo apt-get install zabbix-server-mysql zabbix-frontend-php zabbix-agent -y

Because Zabbix won't create its own database, we need to take care of that ourselves. Use the following commands to log in to the MySQL server and create the database (change <zabbix_user> and <zabbix_password> to a username and password of your choosing):

mysql -u root -p
CREATE DATABASE zabbix_db character set utf8 collate utf8_bin;
GRANT ALL PRIVILEGES on zabbix_db.* to <zabbix_user>@localhost identified by '<zabbix_password>';
FLUSH PRIVILEGES;
exit;

cd /usr/share/doc/zabbix-server-mysql/
sudo zcat create.sql.gz | mysql -u <zabbix_user> -p zabbix_db

Now we have to point the Zabbix server to the newly created database. First we need to open the configuration file:

sudo nano /etc/zabbix/zabbix_server.conf

Now we have to find the DB configuration and configure it as follows:

DBName=zabbix_db
DBUser=<zabbix_user>
DBPassword=<zabbix_password>

Next up is the timezone. We have to change this in two files: the Zabbix config and the PHP config. You can find your timezone in this list: http://php.net/manual/en/timezones.php

Use the following commands to edit the Zabbix config:

sudo nano /etc/zabbix/apache.conf

# find the line:
# php_value date.timezone Europe/Riga

# Replace it with 
php_value date.timezone <your_timezone>

Now for the php config:

sudo nano /etc/php/7.0/apache2/php.ini

# find the line:
;date.timezone =

# replace it with
date.timezone = <your_timezone>
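Instead of editing by hand, the substitution can also be scripted. A minimal sketch using sed on a scratch copy of the file, with Europe/Amsterdam as an example timezone (on a real system the target would be /etc/php/7.0/apache2/php.ini, edited with sudo):

```shell
# Work on a scratch copy so nothing real is touched in this example.
ini=/tmp/php.ini
printf ';date.timezone =\n' > "$ini"

# Uncomment the directive and set the timezone in one pass.
sed -i 's|^;date.timezone =.*|date.timezone = Europe/Amsterdam|' "$ini"
cat "$ini"   # prints: date.timezone = Europe/Amsterdam
```

The same one-liner, pointed at the real php.ini, is handy when you set up more than one Zabbix box.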

Now we can start Zabbix and make the server and agent services start at boot (we also need to restart Apache):

sudo systemctl restart zabbix-agent
sudo systemctl restart apache2
sudo systemctl restart zabbix-server

sudo systemctl enable zabbix-server
sudo systemctl enable zabbix-agent

The Zabbix server is now operational, and so is the Zabbix agent. Now we have to configure the Zabbix web interface to be able to use the server (and agent).

Navigate your browser to http://<the_ip_of_your_zabbix_server>/zabbix

Your page should look like the following image:

Click “Next Step” to continue.

 

If you followed all steps above, the next page should look like this, with all prerequisites met. Click “Next step” to continue. (I forgot to capture this page, so it looks a little different; luckily Google has the solution to (almost) every problem.)

 

Enter the database information you used while configuring the database and click “Next Step” to continue.

 

Now we can give our Zabbix instance a name. The hostname and port configuration can be left at the default setting (unless you are already using this port for something else, or you run the Zabbix server component on a different machine). Click “Next step” to continue.

 

The Zabbix installation is now complete. Click “Finish” to navigate to the Zabbix login page.

The default login credentials are username: “Admin” and password: “zabbix”. Both username and password are case sensitive.

Installing the agent on a Linux host

Although we already installed the agent on the Zabbix server during the setup process, the process of installing the agent is explained here so it can easily be repeated on other Debian-based Linux hosts.

First we need to add the repository (in my case I'm using Xenial; if you are using another distro, check this link for the available distros):

wget http://repo.zabbix.com/zabbix/3.2/ubuntu/pool/main/z/zabbix-release/zabbix-release_3.2-1+xenial_all.deb
sudo dpkg -i zabbix-release_3.2-1+xenial_all.deb

Next up, install the agent:

sudo apt-get update
sudo apt-get install zabbix-agent -y

Now we need to tell the Zabbix agent where the server is located. To do this open the Zabbix agent configuration:

sudo nano /etc/zabbix/zabbix_agentd.conf

# find the line 
Server=

#change it to
Server=<the_ip_of_your_zabbix_server>


# find the line 
ServerActive=

#change it to
ServerActive=<the_ip_of_your_zabbix_server>


# find the line 
Hostname=

#change it to
Hostname=<the_hostname>

Make sure the service is restarted (so it runs with the new configuration) and will start automatically after the next reboot:

sudo systemctl restart zabbix-agent
sudo systemctl enable zabbix-agent

We're all done here.

Installing the agent on a Windows host

Before we can install the agent, we first need to download it. The agent can be found at the following page “https://www.zabbix.com/download_agents“. Unlike the Linux agent, the Windows agent is not packaged into a handy installer. Luckily the Zabbix team made it easy to install the Agent service.

After downloading the Zip file from the above link, we have to extract the agent and place it somewhere convenient. I placed my agent files in “C:\ProgramData\zabbix_agent\” but any directory will do.

Next we have to edit the “zabbix_agentd.win.conf” file which is located in the “conf” directory:

# find the line
Server=

#change it to
Server=<the_ip_of_your_zabbix_server>


# find the line
ServerActive=

#change it to
ServerActive=<the_ip_of_your_zabbix_server>


# find the line
Hostname=

#change it to
Hostname=<the_hostname>

We are now ready to install the agent service. We can do this by calling “zabbix_agentd.exe” with the “--install” and “--config” parameters.

The command to install the agent service is:

<location_of_your_zabbix_agent_C:\_path>\bin\<win64_or_win32>\zabbix_agentd.exe --config <location_of_your_zabbix_agent_C:\_path>\conf\zabbix_agentd.win.conf --install

To remove the agent use the following command:

<location_of_your_zabbix_agent_C:\_path>\bin\<win64_or_win32>\zabbix_agentd.exe --config <location_of_your_zabbix_agent_C:\_path>\conf\zabbix_agentd.win.conf --uninstall

If you open the Services MMC snap-in, you will see the “Zabbix Agent” service installed and ready to go.

Add a host to Zabbix

After installing the server and agent components we can finally start adding hosts to Zabbix.

To add a host, log in to the Zabbix web interface, navigate to “Configuration” -> “Hosts” and click the “Create host” button at the top right side of the screen.

Fill out the “Add host” form and put the host in the desired group (select “Windows servers” if you are adding a Windows server, etc.). Click “Add” to add the host to Zabbix.

Optionally, you can add templates to the host (the default groups already have templates linked to them).

It can take some time (up to 50 minutes) to get the first readings in the Zabbix web interface.

 

Congratulations, you now have a working Zabbix server with at least one host. Happy monitoring 🙂

 

MSSQL: keep your transaction logs from exploding

I recently discovered that my employer's database server was running out of storage space on the database volume. After some searching, I traced the problem to our XenDesktop site database, whose transaction log file had grown almost 80 GB in 3 months.

I never noticed problems with the transaction logs before, but previously the SQL server was a physical machine with around 1 TB of storage for the databases, while the new database server is a VM with only 350 GB available for the databases.

If you work with Microsoft SQL Server from time to time, you might know that a database usually consists of two files: the database file “<dbname>.mdf” and the transaction log file “<dbname>.ldf”. The database file stores the current version of the database, while the transaction log file records all changes to the database. It is important to have a transaction log, because it enables you to roll back the database to a point in time, or recover the database after a software crash or an unexpected power failure.

There are three recovery models under which the transaction log can operate: Full, Bulk-logged and Simple. I won't go into detail about the different recovery models; Microsoft has a good article about this subject.

In short, Full retains all transactions indefinitely, while Simple removes them after they have been committed to the database. Bulk-logged is almost the same as Full, but it has some optimizations for logging bulk transactions.

Our Citrix database was configured to use the Full recovery model, which is the default setting.

To solve this issue while retaining the ability to recover data in the event of a crash, we will have to set up transaction log backups. If configured right, this can be an automated process.

Follow these steps to back up a transaction log:

In SQL Server Management Studio, right-click the database whose transaction log you want to back up. Navigate to Tasks and click “Back Up…“.

 

In the Back Up Database dialog on the General tab, change the backup type to “Transaction log” and specify a destination for your backup.

 

In the Back Up Database dialog on the Media Options tab, check the “Verify backup when finished” checkbox if you want SQL server to verify the backup after it finishes.

 

In the Back Up Database dialog on the Backup Options tab, set backup compression to “Compress backup” (unless you just want to move your transaction log, in which case you can leave the setting at the default).

 

At this point you can simply click the OK button and the backup will start. In this case it will be a one-time-only backup. We ICT people like to automate things so we won't have to repeat actions over and over again. Fortunately, we can turn this process into an automated task.

 

To turn the above process into an automated task, click the downward arrow next to the Script button at the top of the dialog and select “Script Action to Job” (or press “Ctrl+Shift+M”).

 

Unfortunately, SQL Server Express edition won't allow automated jobs, so for the rest of the process I have used a much older SQL Server Standard instance.

 

Clicking the “create job” button will open a new dialog named “New Job”. Most of the information on this page and the script itself are already added to the new job. Navigate to the “Schedules” tab to schedule the new job.

 

On the Schedules tab we can create new schedules. In this case we want to back up the transaction log every Sunday at 1 AM. You can set multiple schedules for each job.

 

On the Notifications tab we can set what type of notifications we want to receive from this job. You can configure this to your liking. Click OK to save the job.

 

If everything went as planned, we now have an automated transaction log backup job. Unfortunately, the transaction log remains the same size as it was before. To release the backed-up space we need to shrink the transaction log.

WE DON’T WANT TO SHRINK THE TRANSACTION LOG TOO OFTEN, GROWING THE LOG IS A SLOW PROCESS!!!

If your transaction log is huge and you want to reclaim some of the space, continue following these steps:

To be able to shrink the transaction log file, we first need to switch the database to the Simple recovery model.

Right-click the database you want to shrink and click “Properties”.

In the Properties dialog, select the Options tab and select “Simple” from the recovery model drop-down list. Click “OK” to apply the change.

 

Right-click the database you want to shrink, navigate to Tasks -> Shrink and click “Files”.

 

In the Shrink File dialog, change the file type to “Log” and make sure the shrink action is set to “Release unused space”. Click “OK” to start the shrink action.

Don’t forget to change the recovery model back to Full, if it was set to Full to begin with 😉

 

 

 

Citrix XenDesktop 7.17 part 3: publishing a Desktop

After setting up the delivery controller and configuring StoreFront to allow the HTML5 Receiver, it is finally time to add a desktop to the mix.

In this step-by-step article we will be adding a single desktop in “Remote PC Access” mode. This means that XenDesktop won't power-manage the desktop and we won't be able to mass-deploy it (it's not a golden image).

This might sound a little disappointing, but it's good for the basics. If you're setting up a lab environment you probably want it this way because it's way more flexible, and when you have your image the way you want it, you can always re-add it with a different configuration.

Let's get started.

We start by installing the XenDesktop Virtual Delivery Agent (VDA) on the VM. In my setup this is a Windows 10 VM, but you can use all supported versions (7 to 10 at this time) and even some Linux distros!

After installing Windows on a new VM, we can start the XenDesktop/XenApp installer. This is the same installation media used for setting up the delivery controller. Click the Start button next to the XenDesktop row to start the installation.

 

Click the “Virtual Delivery Agent for Windows Desktop OS” button to start the installation process.

 

Just like I wrote earlier, I'm installing in Remote PC mode. In this case I want only one VDI and I'm not interested in rolling out a new set of VDIs after every change (lab environment, remember). So select “Enable Remote PC Access” and click Next to continue.

 

We might as well install the Receiver in the VDI; if we want to use XenApp applications we can then do so from inside the VDI (saves a lot of mouse work). Click Next to continue.

 

I won't be using any of the additional components in this particular VDI. If you think you need any (the supportability tools and Profile Management are very useful if you're planning to deploy the VDI to multiple users), select them and click Next to continue.

 

Now we need to specify the delivery controller. You can do this manually or using AD. I only have one delivery controller and I don't mind typing the name. Type the FQDN of the delivery controller and use the “Test connection..” button to verify there is a delivery controller on the other side of the line. Click Add.

 

After clicking Add, the screen looks like the above image (but probably with another FQDN). If you don't want to add another delivery controller, click Next to continue.

 

Now we're able to select features. Everyone wants to optimize performance, so that one is an easy pick 😉 . Remote Assistance is needed if we want to use the shadowing functionality from Citrix Director. Real-Time Audio kind of explains itself. Framehawk is a new transport protocol optimized for low-bandwidth WAN connections. In my case I want everything except Framehawk.

Select the features you want and click Next to continue.

 

The setup can create firewall rules for Windows Firewall. If you are running Windows Firewall you can let the setup do its thing; otherwise you have to create the rules yourself. Click Next to continue.

 

The setup now has all the information it needs to start installing. It politely shows what it's going to install on your system. Click Install to start installing.

 

The installation will take a good 10 minutes (depending on your system). We have to wait until it's finished.

 

After installing, the setup wants you to enable Call Home, the Citrix telemetry service. I don't need nor want telemetry services, so thanks but no thanks. Click Next to continue.

 

Now the setup is truly done. After restarting, the system will be good to go. Click Finish to restart the machine.

 

Now we're done on the VDI side of things. We will continue the operation on the delivery controller with Citrix Studio.

In Citrix Studio, navigate to the Machine Catalogs tab and click the “Create Machine Catalog” link on the right side of the screen. This will open the Machine Catalog Setup wizard.

 

Click Next to continue.

 

We're adding a Desktop OS system, so select the “Desktop OS” radio button and click Next to continue.

 

Since we only want one VDI and we're not letting XenDesktop power-manage it, we select the appropriate radio buttons (no power management and “another service or technology”). Click Next to continue.

 

Since we only have one VDI and we might have multiple users, we will be using a random pool. Click Next to continue.

 

We have to define the VDIs we want to add to the pool. This is not necessary if you're using another provisioning technology like PVS or MCS. Click “Add computers…”, type the name of your computer and click OK. Since we're using the latest version of XenDesktop, we can leave the functional level the way it is. Click Next to continue.

 

Type a name for the Machine Catalog and click Finish.

 

We have now created a machine catalog with a machine in it 🙂 . Let's add it to a delivery group so we can start it from the Receiver.

 

Navigate to the “Delivery Groups” tab in Citrix Studio and click the “Create Delivery Group” link on the right side of the screen.

 

At the “Create Delivery Group” wizard introduction page, click Next to continue.

 

At the Machines tab, select the machine catalog we created earlier, set the number of machines to 1 (we only have one machine in the catalog) and click Next to continue.

 

At the Users tab, we can choose to limit the availability of the desktop group to specific users and/or groups. In this lab setup all users may access the delivery group, so we just click Next to continue.

 

If we want to publish applications, this is the place to do it. Since we want to publish the entire desktop, we won't publish applications. Click Next to continue.

 

Because a delivery group can house multiple types of desktops (for example, desktops with and without GPU acceleration), we have to add at least one desktop assignment. Click the “Add…” button to continue.

 

Type in a display name for the delivery group; this is the name the end user will see in the Receiver. We can restrict this desktop assignment rule to desktops with a specific tag (tags can be added to individual desktops from the Search tab in Citrix Studio). We can also restrict which users will be able to use the desktops; in this case all users are allowed to use the published desktop. Click OK to continue.

 

One desktop assignment added. Click Next to continue.

For some strange reason I forgot to take a screenshot of the Summary page. Luckily the Summary page displays no new information.

 

 

Assuming we have clicked the Finish button on the Summary page, we are now back at Citrix Studio. It shows the newly added delivery group, so we can go and see if the desktop is added to the Receiver.

 

Yep, it's showing up on the Receiver side of things. Let's fire it up.

 

One Windows 10 desktop sitting happily in a web browser, I love progress 🙂 .

 

Citrix Xendesktop 7.17 part 2: Configuring StoreFront for the HTML5 Receiver

In the first part of this series of articles about Xendesktop, we installed a delivery controller and configured an empty site. In part 2 we will configure Citrix StoreFront for use with the Receiver for Web and set up the necessary policies. This will greatly improve the environment for testing purposes, because we won't have to install or reconfigure the Citrix Receiver application every time we want to test something.

When we start Citrix Studio, there are two snap-ins loaded: Citrix Studio and Citrix StoreFront. When we click on the StoreFront snap-in, we get the option to view or change stores or create a new store. Citrix adds a store by default, so click on “View or Change Stores”.

 

The next screen gives a list of available stores and some basic information about the selected store. We want to enable the HTML5 Receiver; to do this, click the “Manage Receiver for Web Sites” link on the lower right side of the screen.

 

After clicking the link the above window appears. Click the Configure… button to continue.

 

Select the “Deploy Citrix Receiver” tab and choose “Always use Receiver for HTML5” in the deployment options drop-down.

 

Click on the “Client Interface Settings” tab and deselect “Auto launch desktop” under Web sessions; otherwise, every time you open the Receiver page it will attempt to launch the first available desktop. Click OK to close the dialog, and Close on the next dialog.

 

Now we want to set up the default domain. This step is not necessary, but it will save us the trouble of retyping the domain name every time we want to log in to the Receiver. Click “Manage Authentication Methods” at the bottom right of the screen.

 

Click the gear icon next to the “User name and password” method and click on “Configure Trusted Domains”.

 

Select “Trusted domains only” if you only want to allow logon attempts for trusted domains. Click Add… and type the name of your domain, then click OK to close the dialog. If you have multiple domains, you can check the “Show domains list in logon page” checkbox. Click OK to close the dialog and click OK again to close the second dialog.

Now that we have StoreFront ready to go, we still have to set a policy to allow WebSocket connections to all Xendesktop and Xenapp systems. Without WebSockets enabled we won't be able to use the HTML5 Receiver.

In Citrix Studio, navigate to the Policies section and click Close to close the welcome message.

 

This section of Citrix Studio gives an overview of the applied policies. We want to create a new policy; to do this, click the Create Policy link on the right side of the screen.

 

After clicking the Create Policy link, the Create Policy wizard pops up. We want to enable WebSockets, so we search for “webs” or something like it. Three settings are shown: allow connections, port numbers and trusted origins. For testing purposes we only need to enable WebSocket connections. Click the Select link after the “WebSockets connections” setting.

 

At the Edit Setting dialog, click the Allowed radio button and click OK. Click Next on the Create Policy wizard to continue.

 

Click the “All objects in the Site” radio button on the “Users and Machines” tab. If you only want to use the HTML5 Receiver on specific systems or desktop groups, you can do this with the “Selected users and machine objects” option, but in this case we want all systems to be able to use WebSockets. Click Next to continue.

 

Add a name for the Policy and check the Enable policy checkbox. Click Finish to activate the policy.

 

Now we can log on to StoreFront. Browse to the web store URL and log in with an AD account. You will be presented with the above screen; since we don't have applications or desktops yet, the screen is empty.

In the next part we will be adding a Desktop to the mix.

Citrix Xendesktop 7.17 part 1: Setting up the controller

Citrix Xendesktop is the remote desktop solution from Citrix. Since version 7 it is completely integrated with Xenapp, the terminal-server-like solution from Citrix. Together these solutions are capable of overcoming (almost) every workplace virtualization challenge.

I have not updated my home lab environment in a long time; it was still running Xenapp 7.5. And because I have a number of Xendesktop projects planned, I will be setting up a brand new Xendesktop 7.17 environment. This setup will include all Xenapp components.

This article is a step-by-step installation guide for Xendesktop 7.17, let's get started.

 

After mounting, extracting or inserting the Xenapp/Xendesktop installation media and starting the installer we are presented with the above screen. Click the Start button next to “XenDesktop”.

 

Depending on the OS you're using, this screen may look slightly different. We want to install a new delivery controller, so we will click on the Delivery Controller button.

 

After a short load time we are presented with the license agreement. Click the accept radio button and click Next.

 

We want to install all roles on the same server, click Next to continue.

 

I won't be using a separate MS SQL server, so we can let the installer install an Express edition. Click Next to continue.

 

The installer can open all needed default ports in Windows firewall, click Next to continue.

 

Click Next to continue.

 

Wait while the installation progresses, this may take a lot longer than 2 minutes.

 

The system needs to restart during the installation of the prerequisites, click Close to restart the system. After rebooting, log on to the machine and wait until the installation continues.

 

The setup continues where it left off.

 

Smart Tools contains the recently introduced “cloud” features. I don't need them in my lab environment, so I select the last radio button and click Next.

 

The setup has completed the installation, click Finish to start Citrix Studio.

 

We need to set up a site before we can add desktops/apps, etc. Click on “Deliver applications and desktops to your users”.

 

Create an empty site, type a name and click Next.

 

 

The default database names are good enough for me and since we are using the default MS SQL Express installation the location is also good enough. Click Next to continue.

 

Because we have just installed the license server, the licenses have not yet been added, so we will use the trial for now. Click Next to continue.

 

Click Finish to continue.

 

Citrix Studio after setting up a site.

Now that we have an empty site, we can set up StoreFront and add desktops/applications to the site, but this will have to wait until the next part of this guide.

 

 

Citrix Xenserver and pfSense, slow traffic problems

A while back, I added a pfSense installation to my home lab environment.

pfSense is an open-source network firewall/router operating system which offers a large number of additional (third-party) modules to extend its functionality, such as OpenVPN, Snort, etc. At this time I am mainly interested in the OpenVPN functionality, and I would very much like to play around with the ability to set up an IDS (Intrusion Detection System).

With pfSense being a software only implementation, I started by setting up a VM on my home lab Xenserver and initially I was not disappointed. The installation went very smoothly and in less than 20 minutes after browsing to the pfSense download page, I found myself staring at the pfSense web interface.

I completed setting up a number of basic rules and performed some basic tests (like pinging internal and external addresses) without incident. The problems started when I tried to open a webpage through the pfSense NAT: my browser was unable to open the page (located on another test server). A number of troubleshooting steps later, I was able to verify that the connection between my PC and the pfSense VM, and the connection between the pfSense VM and the webserver VM, were both working as expected; the rule sets on the pfSense VM were also configured correctly.

While searching on the web for a solution to my problem I stumbled onto a very interesting topic on the pfSense forums.

pfSense is based on FreeBSD, and FreeBSD won't accept traffic if the checksum on the TCP packet is not valid. To solve this problem, two steps are needed.

1. Hardware checksum offloading needs to be disabled in the pfSense configuration. To achieve this navigate to “System > Advanced > Networking” in the pfSense interface and enable the “Disable hardware checksum offload” option.

2. Hardware checksum offloading needs to be disabled on the pfSense VM virtual interfaces. To achieve this we first need to know the UUIDs of the interfaces. Use the following command on the Xenserver CLI to get the UUIDs:

xe vif-list vm-name-label=<pfSense vm name case sensitive>

The UUIDs can be found after the “uuid ( RO) :” label.

To disable hardware checksum offloading, we need the following commands:

xe vif-param-set uuid=<UUID> other-config:ethtool-tx="off"
xe vif-param-set uuid=<UUID> other-config:ethtool-rx="off"

I was able to get by with only the tx disabled, but I mainly tested with download traffic so some upload testing is still required.
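To save some typing, the two commands can be wrapped in a small loop over all VIFs of the VM. This is only a sketch: the UUID list below is a hypothetical placeholder (it would normally come from `xe vif-list vm-name-label=pfSense --minimal`, which prints a comma-separated list), and the `xe` calls are echoed so you can review them before running them for real on the Xenserver CLI.

```shell
# Placeholder for the comma-separated output of:
#   xe vif-list vm-name-label=pfSense --minimal
# (substitute the real VIF UUIDs from your own system)
uuids="11111111-2222-3333-4444-555555555555,66666666-7777-8888-9999-000000000000"

# Echo the commands for review; remove the `echo` to actually apply them.
for uuid in $(echo "$uuids" | tr ',' ' '); do
  echo xe vif-param-set uuid="$uuid" other-config:ethtool-tx="off"
  echo xe vif-param-set uuid="$uuid" other-config:ethtool-rx="off"
done
```

This way a VM with several interfaces gets both settings applied in one go, instead of copy-pasting each UUID by hand.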

 

Add a local storage repository (SR) to Xenserver

Recently I added a new SSD to my home lab hypervisor.

Xenserver does not detect new disks by default, so I had to add the drive manually. Luckily this is not a complicated process (some command-line experience is required).

First we need to find the device name of the new drive; for this we use the following command:

fdisk -l

Usually the last shown device is the newly added one. In my case it was /dev/nvme0n1 as I was adding an NVMe drive; normally it will be something like /dev/sdb, where b is the position of the drive (the first drive is sda, the second is sdb, etc.).

Now that we know which drive we are adding to the system, we can choose how we want to format it. We can choose between EXT and LVM; in short, LVM is faster but less flexible (no thin provisioning). Usually you will want to keep all your local repositories the same.

In this case I have chosen to go with EXT, because I need the flexibility and the speed is good enough for me.

Now we can add the repository with the following command:

xe sr-create name-label=<your SR name> shared=false device-config:device=/dev/<device name> type=<lvm|ext> content-type=user
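For instance, with the NVMe drive found earlier and the EXT format I chose, the filled-in command would look like the sketch below. The SR name and device name are assumptions you should adjust; the command is built in a variable and only echoed so it can be checked before running it on the Xenserver console.

```shell
device=/dev/nvme0n1   # assumption: the device name reported by `fdisk -l`
sr_name=local-ssd     # assumption: pick any SR name you like

# Build the filled-in sr-create command and print it for review;
# run the printed command on the Xenserver console to create the SR.
cmd="xe sr-create name-label=$sr_name shared=false device-config:device=$device type=ext content-type=user"
echo "$cmd"
```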

 

 

Collabora CODE, an open-source self-hosted web-based office suite with Nextcloud integration

If you're reading this article, you probably know Microsoft Office. And you probably also know about Microsoft's move to the cloud. Today I can tell you one of Microsoft's competitors has made the same move, and it's not Google.

LibreOffice is well known for its open-source and free office solution. Unfortunately, LibreOffice always needed to be installed on a Windows, Linux or macOS operating system. Because of this it was not usable on Android, iOS and ChromeOS (and probably a number of other operating systems).

Around 2012 LibreOffice started a side project to solve this issue once and for all. This project aimed to run LibreOffice in a web browser, enabling all devices with a web browser to use this amazing piece of software.

Unfortunately this project was abandoned somewhere along the way, and this amazing software never saw the light of day. And with the death of the LibreOffice web project, the dream of real productivity where you are the owner of your own data also died. Or so I thought.

Recently (a couple of months back, but I did not have time to try it sooner) I discovered Collabora CODE. Collabora is a company which created a web-based LibreOffice and found some spare time to integrate their product with a number of other applications. One of those applications is Nextcloud.

In this article I will outline the steps needed to install your very own Collabora CODE installation (with Let's Encrypt) and integrate this installation with Nextcloud, all of this without using Docker.

As with the Wekan tutorial, I am using two Ubuntu 16.04 servers: one with the Nextcloud installation (WEB) and the other one with the Collabora CODE installation (CODE). This setup differs a bit from the tutorial on the Collabora webpage, but I think it is better to have your services spread out over a number of servers rather than have them all on one server. This approach will complicate things a little: since I only have one public IP address available, I will have to use a reverse proxy to make this setup work from outside my LAN.

So first of all we have to install Collabora CODE on the “CODE” server. For this part we simply follow the instructions from the Collabora page:

# import the signing key
$ apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 0C54D189F4BA284D
# add the repository URL to /etc/apt/sources.list
$ echo 'deb https://www.collaboraoffice.com/repos/CollaboraOnline/CODE ./' >> /etc/apt/sources.list
# perform the installation
$ apt-get update && apt-get install loolwsd code-brand

When this is done, we have to make sure the CODE box will communicate with the reverse proxy over HTTP (we have only one IP, so no certificates on the CODE box). And while we're editing the Collabora CODE configuration, we might as well make all the necessary changes.

On the CODE box, open the Collabora CODE configuration file:

$ sudo nano /etc/loolwsd/loolwsd.xml

And make the following changes:

<!-- First we set the hostname of the Collabora server. Find the following line and change "(your collabora url office.example.com)" to your needs -->
<server_name desc="Hostname:port of the server running loolwsd. If empty, it's derived from the request." type="string" default="">(your collabora url office.example.com)</server_name>

<!-- Now find the  "<ssl desc="SSL settings">" part and edit the first 2 lines to reflect the below ones-->
<enable type="bool" default="true">false</enable>
<termination desc="Connection via proxy where loolwsd acts as working via https, but actually uses http." type="bool" default="true">true</termination>

<!-- Now we need to configure the backend document storage. Find the "<storage desc="Backend storage">" part and add 2 rules to the "WOPI" part to allow your Nextcloud domain and the Nextcloud server IP to access the CODE box -->
<!-- Make sure you escape all "." with a "\" -->
<!-- The new rules must look like this (with your actual IP and DNS names, of course) -->
<host desc="Regex pattern of hostname to allow or deny." allow="true">nextcloud\.example\.com</host>
<host desc="Regex pattern of hostname to allow or deny." allow="true">192\.168\.x\.x</host>

<!-- Last but not least, add the web admin console login data to the config -->
<!-- find the "<admin_console desc="Web admin console settings.">" part and edit the following 2 lines to your needs -->
<username desc="The username of the admin console. Must be set.">(loginusername)</username>
<password desc="The password of the admin console. Must be set.">(supersecretpassword)</password>

Now restart the Collabora CODE service with the following command:

$ sudo service loolwsd restart
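To verify that loolwsd came back up, you can fetch its WOPI discovery document, which is the endpoint WOPI hosts like Nextcloud use to learn what CODE supports. A small sketch, assuming the default port 9980 and that you run it on the CODE box itself:

```shell
# Assumption: loolwsd listens on port 9980 (the default) on the CODE box.
host=127.0.0.1
discovery_url="http://$host:9980/hosting/discovery"
echo "$discovery_url"
# Fetch it with: curl -s "$discovery_url"
# A short XML document listing supported file types means CODE is up.
```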

Now we move our attention to the WEB box. Here we will need to set up our reverse proxy, install the SSL certificates and make the CODE -> Nextcloud connection.

We will start with the reverse proxy (this part is almost the same as in the Wekan tutorial).

First we need to enable the Apache reverse proxy, SSL and header mods (if they are not enabled already).

$ sudo a2enmod proxy
$ sudo a2enmod proxy_http
$ sudo a2enmod headers
$ sudo a2enmod ssl
$ sudo service apache2 restart

When this is done, we will create a new virtual host config for the HTTP version of the Collabora CODE installation. This virtual host will only be used for the Let's Encrypt certificate validation, so it won't actually point to the Collabora CODE installation.

# first create a directory for the virtualhost and set the webserver user as owner
$ sudo mkdir /var/www/code
$ sudo chown www-data:www-data /var/www/code

# now create the site config
$ sudo nano /etc/apache2/sites-enabled/code.conf

# input the following (change where needed)
<VirtualHost *:80>
    ServerAdmin <your email address>
    DocumentRoot /var/www/code
    ServerName <website address office.example.com>
    ErrorLog ${APACHE_LOG_DIR}/code_error.log
    CustomLog ${APACHE_LOG_DIR}/code_access.log combined
</VirtualHost>

Now we can run the Let's Encrypt certbot and follow the onscreen instructions (if you don't have certbot installed, you can install it by typing “sudo apt-get install certbot”):

$ sudo certbot
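If you prefer to skip the interactive prompts, certbot can also be run non-interactively against the Apache virtual host. A sketch with a hypothetical hostname and e-mail address; the command is only echoed here so you can adapt it first (this assumes the certbot Apache plugin is available):

```shell
domain=office.example.com   # assumption: your CODE hostname
email=admin@example.com     # assumption: your Let's Encrypt contact address

# Echoed for review; remove the `echo` to actually request the certificate.
echo sudo certbot --apache -d "$domain" --non-interactive --agree-tos -m "$email"
```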

The certbot wizard will have created a file named “code-le-ssl.conf”. Open this file with your favorite editor (I use nano) and edit it to reflect the following (make changes where needed):

<IfModule mod_ssl.c>
  <VirtualHost *:443>
    ServerName <your_site_hostname>
    SSLEngine on
    SSLProxyEngine on
    ProxyPreserveHost on
    SSLCertificateFile /etc/letsencrypt/live/<your_site_hostname>/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/<your_site_hostname>/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf
    ProxyPass / http://<code_ip>:9980/
    ProxyPassReverse / http://<code_ip>:9980/
    Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains;"
    <Proxy *>
      Order deny,allow
      Allow from all
    </Proxy>
    ServerAdmin <your_email>
    ErrorLog ${APACHE_LOG_DIR}/code_error.log
    CustomLog ${APACHE_LOG_DIR}/code_access.log combined
</VirtualHost>
</IfModule>

Now restart apache2 on the webserver:

$ sudo service apache2 restart

Because you won't be visiting the Collabora CODE installation manually, it is not necessary to create a redirect for the HTTP URL.

We now have a Collabora CODE installation with a Let's Encrypt certificate ready to go. All that's left is to connect the Collabora CODE installation to the Nextcloud installation.

To enable the link to Nextcloud, we first have to install the Collabora Online app in Nextcloud. This app can be found under “Office & text” in the applications section of your Nextcloud installation; to enable it, click the Enable button. When the app is enabled it will look like the picture below:

Enabled Collabora Online app

After enabling the app, an extra settings item will be added to the Nextcloud admin panel. Go to the admin panel and click the “Collabora Online” menu item. Here you can add the external URL of your Collabora CODE installation. This is the same URL we added to the Collabora CODE configuration (office.example.com). This will look like the picture below:

Collabora Online Settings

 

If all went well, you will now be able to create and open most office files from your Nextcloud installation and edit them directly from your web browser.

 

Images by CollaboraOffice.


Kanban with Wekan: a self-hosted Trello-like web app

Almost everyone knows the online productivity platform Trello. Trello is a Kanban tool and can be used for virtually every problem, as long as the problem involves tracking something. Whether you are trying to keep track of time, tasks or inventory, Kanban is a good way to go.

Kanban was originally designed as an inventory control system, to help automate production lines in factories. Over the years countless tools (like Trello) have been created to help with using Kanban for various purposes. Nowadays lots and lots of people use those Kanban tools without knowing anything about the Kanban method, and for most of them this works just fine.

This post is not about the Kanban method; it's not even about a Kanban tool. It is about setting up a self-hosted Kanban tool.

Now the first question is: why would you host your own Kanban tool while there are plenty of free tools on the market? Well, it mostly comes down to security, trust and privacy. When I'm working on a project, I want to be able to put sensitive data on my cards. While this data often is not interesting for a company like Trello, it might be interesting for other companies. In my opinion, the questions you should be asking yourself are: do you trust a company like Trello with the data you want to put on their service, do you expect them not to make money off a “free” service, and can this impact your business in a negative way?

So after using Trello for some time, I started looking into a good self-hosted solution. I tried a number of Kanban tools, and eventually chose Wekan.

Image source: https://wekan.github.io/

Of course, self-hosted applications won't work directly after you have chosen to use them; you need to run them on your own system/server (or on a cloud-hosted system, VPS, etc.).

You can get Wekan from their releases page; they serve Wekan in four flavors:

  • A Docker container
  • A Sandstorm app
  • An Ubuntu Snap package
  • A VirtualBox image

Because I was already planning on using an Ubuntu server for this project, the Snap package was the logical choice.

Snap packages are super easy to set up; you basically execute the command:

$ sudo snap install <snap_package_name>

On the Wekan wiki there is a short and to-the-point installation tutorial, but for basic installations the following will be sufficient:

$ sudo snap install wekan

After the above command completes, Wekan will be running at “http://<ip_of_the_system>:8080”.
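If you are not sure which IP address the server has, you can list its addresses and build the URL from the first one. A small sketch, assuming a Linux host (like the Ubuntu server used here) where `hostname -I` is available:

```shell
# Print the host's IP addresses and take the first one,
# then build the URL the fresh Wekan installation answers on.
ip=$(hostname -I | awk '{print $1}')
echo "http://$ip:8080"
```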

When you navigate to the fresh Wekan installation, you don't have a user account to log in with. Instead, use the built-in registration feature on the login page. The first user you create will automatically be an administrator.

Great, now we have a working local Wekan installation. The next step is to make this installation available to the big and scary world wide web. And because we don't want to send our sensitive data over the internet unencrypted, we will also implement TLS with Let's Encrypt.

In my situation, the server running Wekan and the main web server are two separate Ubuntu installations. For this to work we will use a reverse proxy, and because my main web server runs Apache, it will be an Apache reverse proxy. If you are using the same installation for both, the next part will not be needed; see the Wekan wiki for more information on this.

First we need to enable the Apache reverse proxy, SSL and header mods (if they are not enabled already).

$ sudo a2enmod proxy
$ sudo a2enmod proxy_http
$ sudo a2enmod headers
$ sudo a2enmod ssl
$ sudo service apache2 restart

When this is done, we will create a new virtual host config for the HTTP version of the Wekan installation. This virtual host will only be used for the Let's Encrypt certificate validation, so it won't actually point to the Wekan installation.

# first create a directory for the virtualhost and set the webserver user as owner
$ sudo mkdir /var/www/wekan
$ sudo chown www-data:www-data /var/www/wekan

# now create the site config
$ sudo nano /etc/apache2/sites-enabled/wekan.conf

# input the following (change where needed)
<VirtualHost *:80>
        ServerAdmin <your email address>
        DocumentRoot /var/www/wekan
        ServerName <website address wekan.example.com>

        ErrorLog ${APACHE_LOG_DIR}/wekan_error.log
        CustomLog ${APACHE_LOG_DIR}/wekan_access.log combined
</VirtualHost>

Now we can run the Let's Encrypt certbot and follow the onscreen instructions (if you don't have certbot installed, you can install it by typing “sudo apt-get install certbot”):

$ sudo certbot

The certbot wizard will have created a file named “wekan-le-ssl.conf”. Open this file with your favorite editor (I use nano) and edit it to reflect the following (make changes where needed):

<IfModule mod_ssl.c>
    <VirtualHost *:443>
        ServerName <your_site_hostname>
        SSLEngine on
        SSLProxyEngine on
        ProxyPreserveHost on
        SSLCertificateFile /etc/letsencrypt/live/<your_site_hostname>/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/<your_site_hostname>/privkey.pem
        Include /etc/letsencrypt/options-ssl-apache.conf

        ProxyPass / http://<wekan_ip>:8080/
        ProxyPassReverse / http://<wekan_ip>:8080/
        Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains;"
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>

        ServerAdmin <your_email>

        ErrorLog ${APACHE_LOG_DIR}/wekan_error.log
        CustomLog ${APACHE_LOG_DIR}/wekan_access.log combined
</VirtualHost>
</IfModule>

Now restart apache2 on the webserver:

$ sudo service apache2 restart

You can add a redirect to the HTTP version of the site, so you won't have to type “https://” every time you want to visit your Wekan installation.

# create a redirect index.html
$ sudo nano /var/www/wekan/index.html

# insert the following (edit where needed)

<HTML>
<head>
<meta http-equiv="refresh" content="0;URL=https://<your_domain_here>/" />
</head>
<body>
nothing to see here, go to <a href="https://<your_domain_here>/">this location</a> for a more useful page.
</body>
</HTML>

After the above changes, we need to let Wekan know its new URL (the one you will be using to access Wekan). This can be done with the following command:

$ sudo snap set wekan root-url="https://<your_wekan_url>"

After this last change, the Wekan service needs a restart:

$ sudo systemctl restart snap.wekan.wekan

 

Congratulations, you now have a working Wekan installation which is accessible from the internet.

 

Having the Wekan installation up and running is one thing; maintaining it is another. Luckily, over at Wekan there are a number of articles on maintaining your Wekan server; one of the most important is “backup and restore“. From time to time you might want to update your Wekan installation. Upgrading a snap package is just as easy as installing it, just enter the following command:

$ sudo snap refresh wekan

 

Have fun with your Wekan installation

 

 

featured image from: https://en.wikipedia.org/wiki/Kanban_board

This theme (and how it came to be)

After about 2.5 years of not having a blog, I thought it was time to give it another try. My last blogging attempt failed because I did not have the time to post something interesting from time to time. The blog itself also looked horribly bad: with a black theme and orange accents, it was so bad I did not keep a copy of the theme when I finally pulled the plug. At the time I was unable to make up my mind about the language I would write articles in, so I ended up with a combination of Dutch and English articles, both of which were completely outdated by the time I decided to shut down my blog.

This time around I will put a lot more effort into this blog. Of course, every good blog starts with a good theme; in this case I was searching for a semi-serious theme. I found what I was looking for with Techieblog (by wpcrumbs). The image below contains a screenshot of the original theme, which I really liked, although I had some issues with the three static images on the front page, the almost useless sidebar (for my purposes) and the really crappy horizontal menu with vertical text outlining.

As can be seen I’ve made a number of changes to the theme:

  • changed the three static images header on the front page to a most recent posts slider.
  • changed the design from 2 columns to 3 columns.
  • changed the menu so it is vertically oriented on the right side of the screen.
  • moved the sidebar to the menu.
  • removed the related posts section.
  • changed the theme's overall bright color scheme to a much darker scheme (which reads better, in my opinion).

I started off with changing the header on the front page to a slider. To do this I installed the Smart Slider plugin from the WordPress plugin page and used it to create a new slider for the front page. With the shortcode for the new slider in hand, I opened the header file of the WordPress theme and changed the code for the 3 static images to the shortcode for the slider.

//OLD
<div class="list list-hover">
    <div class="col-sm-4 col-xs-12">
        <a href="<?php echo get_theme_mod('techieblog_section_one_link_one'); ?>" style="<?php echo esc_attr(techieblog_featured_articles_item_style('techieblog_section_one_img_one')); ?>">
          <?php if (get_theme_mod('techieblog_section_one_title_one')) { ?>
            <h2><?php echo get_theme_mod('techieblog_section_one_title_one'); ?></h2>
          <?php } ?>
        </a>
    </div>

    <div class="col-sm-4 col-xs-12">
        <a href="<?php echo get_theme_mod('techieblog_section_one_link_two'); ?>" style="<?php echo esc_attr(techieblog_featured_articles_item_style('techieblog_section_one_img_two')); ?>">
          <?php if (get_theme_mod('techieblog_section_one_title_two')) { ?>
            <h2><?php echo get_theme_mod('techieblog_section_one_title_two'); ?></h2>
          <?php } ?>
        </a>
    </div>

    <div class="col-sm-4 col-xs-12 clearfix">
        <a href="<?php echo get_theme_mod('techieblog_section_one_link_three'); ?>" style="<?php echo esc_attr(techieblog_featured_articles_item_style('techieblog_section_one_img_three')); ?>">
          <?php if (get_theme_mod('techieblog_section_one_title_three')) { ?>
            <h2><?php echo get_theme_mod('techieblog_section_one_title_three'); ?></h2>
          <?php } ?>
        </a>
    </div>

</div>

//NEW
<div class="list list-hover">
<?php
  echo do_shortcode('<your smart slider shortcode>');
?>
</div>

Next on the list was the 2-to-3-column design. This only required some simple changes to the theme's main CSS file:

/*OLD*/
.content-area.content-sidebar {
 width: calc(100% - 260px);
 /*<omitted>*/
}

#blog-isotope-masonry article {
  width: calc(50% - 20px);
  /*<omitted>*/
}

/*NEW*/
.content-area.content-sidebar {
 width: 100%;
 /*<omitted>*/
}

#blog-isotope-masonry article {
  width: calc(33% - 20px);
  /*<omitted>*/
}

Next up was the menu. This part needed a little more love, because it clearly did not get much attention in the original theme.

Of course it would be easier to add an ID to the nav section I was working on, so I edited the header file once again:

//OLD
<nav class="pull">

//NEW
<nav class="pull" id="pull">

The new ID came in handy while editing the JS file (js/customizer.js), which handles the sliding effect of the menu.

//OLD
jQuery("#nav-toggle").on("click", function (e) {
  // add active class
  jQuery(this).toggleClass("active");
  // pull down the menu
  jQuery('.pull').slideToggle();
  //<omitted>
});

//NEW
jQuery("#nav-toggle").on("click", function (e) {
  // add active class
  jQuery(this).toggleClass("active");
  if (jQuery(this).hasClass("active")) {
    document.getElementById("pull").style.width = "250px";
  } else {
    document.getElementById("pull").style.width = "0";
  }
  //<omitted>
});
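The open/close logic above boils down to toggling a width between 250px and 0. A minimal framework-free sketch, where the plain objects stand in for the real #pull nav and #nav-toggle button:

```javascript
// Minimal sketch of the slide-out toggle above, without jQuery.
// "nav" and "btn" are stand-ins for the real #pull <nav> and the
// #nav-toggle button; only the open/close logic is shown.
function toggleMenu(nav, btn) {
  btn.active = !btn.active;                     // mirrors toggleClass("active")
  nav.style.width = btn.active ? "250px" : "0"; // slide the menu in or out
  return nav.style.width;
}

// Example: two clicks open and then close the menu.
const nav = { style: { width: "0" } };
const btn = { active: false };
toggleMenu(nav, btn); // "250px" (open)
toggleMenu(nav, btn); // "0" (closed)
```

Because the CSS transition is on the width property (see the .pull rules below), changing the width is all the JS needs to do; the browser animates the rest.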

After some CSS changes the menu looks much nicer, has support for different item levels and takes up much less space.

/*REMOVED*/
.pull {
  display: none;
}

/*ADDED*/
#nav-toggle {
  /*<omitted>*/
  z-index: 2;
}
.sub-menu {
  margin-left: 10px !important;
  padding: 5px 0 5px 0 !important;
}
.sub-menu li {
  border-left: 1px solid rgba(255, 255, 255, 0.6);
}

/*OLD*/
.pull {
  background-color: #1a1a1a;
  max-height: 100vh;
  margin-top: -1px;
  overflow-y: auto;
}
.pull div ul {
  list-style: none;
  width: 50%;
  min-width: 200px;
  padding: 30px 0 30px 60px;
}
.pull div ul li {
  color: rgba(255, 255, 255, 0.6);
  display: block;
  transition: all 0.5s ease;
  padding-left: 0;
  position: relative;
  border-bottom: 1px solid #7c7c7c;
}
.pull div ul li a {
  color: #fff;
  text-transform: uppercase;
  display: block;
  transition: all 0.5s ease;
  padding: 20px 20px 20px 0;
  position: relative;
}

/*NEW*/
.pull {
    height: 100%;
    width: 0;
    position: fixed;
    z-index: 1;
    top: 0;
    right: 0;
    background-color: rgba(0, 0, 0, 0.4);
    text-shadow: .3px .3px #000;
    overflow-x: hidden;
    transition: 0.5s;
    padding-top: 60px;
}
.pull div ul {
  list-style: none;
  width: 100%;
  min-width: 200px;
  padding: 30px 0 30px 0;
  margin:0;
}
.pull div ul li {
  color: rgba(255, 255, 255, 0.6);
  display: block;
  transition: all 0.5s ease;
  padding-left: 0;
  position: relative;
  border-bottom: 1px solid rgba(255, 255, 255, 0.6);
}
.pull div ul li a {
  color: #fff;
  text-transform: uppercase;
  display: block;
  transition: all 0.5s ease;
  padding: 5px 5px 5px 20px;
  position: relative;

}

Because I more or less removed the sidebar's place when I made the front page three columns wide, it was only fair to move it somewhere it would not be in the way: the menu, for example. So after removing the sidebar from the theme's footer template, I placed it in the menu section.

//OLD:
<nav class="pull" id="pull">
  <?php wp_nav_menu(array('theme_location' => 'primary', 'menu_id' => 'primary-menu', 'menu_class' => 'top-nav')); ?>
</nav>

//NEW
<nav class="pull" id="pull">
  <?php wp_nav_menu(array('theme_location' => 'primary', 'menu_id' => 'primary-menu', 'menu_class' => 'top-nav')); ?>
  <?php get_sidebar(); ?>
</nav>

Another thing I think is very annoying on the kind of blog I'm working on is related posts. Although the idea of showing related posts under every post is good by itself, the implementation is often very bad, so dropping the functionality altogether is often the better approach. While I was at it, I also removed the post navigation (next and previous post) from the single.php file.

//REMOVED

the_post_navigation();

if (!is_page()): ?>
  <?php techieblog_related_posts(); ?>
<?php endif;

And finally it is time for the big one: all the other CSS changes to make the theme more to my liking. These changes mostly come down to changing colors and making things transparent, but I also had to realign some parts because transparent elements were overlapping.

/*OLD*/
#hero {
  min-height: 200px;
  max-height: 300px;
  /*<omitted>*/
}
.hentry {
  background: #fff;
  /*<omitted>*/
}
.shape-top-left {
  top: -20px;
  background: #fff;
  /*<omitted>*/
}
.shape-top-left:after {
  border-color: transparent transparent transparent #fff;
  /*<omitted>*/
}
.shape-top-right {
  top: -10px;
  background: #fff;
  /*<omitted>*/
}
.shape-top-right:after {
  border-color: transparent transparent #fff transparent;
  /*<omitted>*/
}
.shape-bottom-left {
  bottom: -10px;
  background: #fff;
  /*<omitted>*/
}
.shape-bottom-left:after {
  border-color: #ffffff transparent transparent transparent;
  /*<omitted>*/
}
.shape-bottom-right {
  bottom: -20px;
  background: #fff;
  /*<omitted>*/
}
.shape-bottom-right:after {
  border-color: transparent #ffffff transparent transparent;
  /*<omitted>*/
}
body {
  color: #555;
  /*<omitted>*/
}
#blog-isotope-masonry article {
  padding: 60px;
  margin: 0 20px 50px 0;
  background: #fff;
  /*<omitted>*/
}
#blog-isotope-masonry article [class*="shape-"] {
  background: #fff;
}
#blog-isotope-masonry article .shape-top-left {
  top: -10px;
  width: 30px;
}
#blog-isotope-masonry article .shape-top-right {
  top: -5px;
}
#blog-isotope-masonry article .shape-bottom-left {
  width: 30px;
  height: 15px;
  bottom: -10px;
}
#blog-isotope-masonry article .shape-bottom-right {
  bottom: -15px;
  width: 45%;
  height: 25px;
}
.entry-title a {
  color: #444;
}

/*NEW*/
#hero {
  min-height: 100px;
  max-height: 150px;
  /*<omitted>*/
}
.hentry {
  background: rgba(50, 50, 50, .6);
  /*<omitted>*/
}
.shape-top-left {
  top: -30px;
  background: rgba(50, 50, 50, .6);
  /*<omitted>*/
}
.shape-top-left:after {
  border-color: transparent transparent transparent rgba(50, 50, 50, .6);
  /*<omitted>*/
}
.shape-top-right {
  top: -20px;
  background: rgba(50, 50, 50, .6);
  /*<omitted>*/
}
.shape-top-right:after {
  border-color: transparent transparent rgba(50, 50, 50, .6) transparent;
  /*<omitted>*/
}
.shape-bottom-left {
  bottom: -20px;
  background: rgba(50, 50, 50, .6);
  /*<omitted>*/
}
.shape-bottom-left:after {
  border-color: rgba(50, 50, 50, .6) transparent transparent transparent;
  /*<omitted>*/
}
.shape-bottom-right {
  bottom: -30px;
  background: rgba(50, 50, 50, .6);
  /*<omitted>*/
}
.shape-bottom-right:after {
  border-color: transparent rgba(50, 50, 50, .6) transparent transparent;
  /*<omitted>*/
}
body {
  color: #777;
  /*<omitted>*/
}
#blog-isotope-masonry article {
  padding: 40px;
  margin: 40px 20px 50px 0;
  background: rgba(50, 50, 50, .6);
  /*<omitted>*/
}
#blog-isotope-masonry article [class*="shape-"] {
  background: rgba(50, 50, 50, .6);
}

#blog-isotope-masonry article .shape-top-left {
  top: -30px;
  width: 30px;
}
#blog-isotope-masonry article .shape-top-right {
  top: -20px;
}
#blog-isotope-masonry article .shape-bottom-left {
  width: 30px;
  height: 20px;
  bottom: -20px;
}
#blog-isotope-masonry article .shape-bottom-right {
  bottom: -30px;
  width: 45%;
  height: 30px;
}
.entry-title a {
  color: #444;
  text-shadow: .3px .3px #000;
}

Now that we're all set up, there is only one question remaining: these are instructions for changes to the main theme, so why didn't I create a child theme and make the changes there?

Well, there is a good reason for that. I know it is best practice to make a child theme, so you can keep receiving updates for the parent theme without losing all your changes. Unfortunately I was not able to make a number of the changes in a child theme: some parts of the theme turned white, and other changes were ignored altogether. This might have to do with the way the original theme was constructed, or it might be that I am not very good at CSS (sysadmin does not equal web designer). Either way, I have created a completely separate theme with TechieBlog as its foundation. This means I won't receive theme updates, but will have to deal with any problems that might arise myself.

Image from: http://www.psdtowordpressexpert.com

Repairing Windows Software RAID 1, not as easy as you might expect

At work, we have a number of servers which use the default Windows software RAID implementation to ensure OS disk redundancy (RAID 1). Whether software RAID 1 is a good solution for the situation is debatable, but that is outside the scope of this article.

One Thursday morning I noticed one of the OS disks of the SQL server was showing up as missing in Windows Disk Management. The server was still running fine; that is why we use RAID 1 in the first place. Of course the defective drive needed to be replaced as soon as possible, so I went out, bought a new drive and told my colleagues the system would not be available in the evening.

Normally, swapping a drive is super easy. You just replace the defective drive, tell Windows to drop the missing drive from the mirror, and create a new mirror from the still-working drive and the new drive.
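That procedure can also be done from a diskpart console instead of Disk Management. A rough sketch (the volume and disk numbers are examples, check them with the list commands first; the parentheses are annotations, not diskpart syntax):

```text
DISKPART> list volume          (find the mirrored OS volume)
DISKPART> select volume 0      (example number)
DISKPART> break disk=1 nokeep  (drop the failed/missing half of the mirror)
DISKPART> select volume 0
DISKPART> add disk=2           (start re-mirroring onto the new disk)
```

The resync after "add disk" can take hours on large volumes; the volume stays online while it runs.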

Unfortunately, Windows Disk Management came up with the following error when I tried to create the new mirror: “All disks holding extents for a given volume must have the same sector size, and the sector size must be valid.” Bummer. After some googling it became apparent that it would not be possible to create a mirror with this combination of disks (the error typically comes up when mixing older 512-byte-sector drives with newer 4K Advanced Format drives). Luckily it should be possible to clone the old disk to the new disk. Clonezilla to the rescue, or not…

After booting from my Clonezilla thumb drive and walking through the disk cloning wizard, I got an error stating that the disk could not be cloned to the new drive because the new drive was 5 MB smaller. Not great news, but I could still clone the separate partitions and fix the MBR using the Windows installer thumb drive. So I booted back into Windows, shrunk the data partition by about 100 MB, booted back into Clonezilla and started the partition clone. About one hour later the cloning process was finished. Unfortunately I was unable to fix the MBR, because the BootRec utility could not find the Windows installation.
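For reference, the usual MBR repair from the recovery command prompt on the installer thumb drive looks roughly like this; in my case it was the rebuild step that could not find the Windows installation (the parentheses are annotations, not part of the commands):

```text
X:\Sources> bootrec /fixmbr       (write a fresh MBR, leaving the partition table alone)
X:\Sources> bootrec /fixboot      (write a new boot sector)
X:\Sources> bootrec /scanos       (list the Windows installations bootrec can find)
X:\Sources> bootrec /rebuildbcd   (rebuild the boot configuration data)
```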

At this point it was somewhere around 1 AM, and the server in question needed to be working at 7 AM, so I was beginning to get a little nervous. I could boot the old, working drive and let it run for another day, but with a 7-year-old disk this would be a big risk, so another solution was desirable.

My next attempt to clone the disk turned out to be a good move. I used a very useful Sysinternals utility named Disk2vhd. This utility can clone a physical disk to a VHD or VHDX; it can even back up the OS disk while the system is running, by using a shadow copy. It took about 45 minutes, but after that I had my VHDX file. Unfortunately the utility I used to restore the image to the new disk (Vhd2disk) only accepted VHD files, so I needed to run Disk2vhd again. Unfortunately (I use that word often in this post) Vhd2disk was unable to write back to the physical disk. I ran it twice and it crashed both times just before the process ended; my guess is it failed for the same reason Clonezilla could not perform a full disk clone.

Just after the second attempt to write the VHD to disk, the old working disk broke down. At this point it was around 4 AM, with 3 hours left and no working drive to fall back on. Time for some drastic measures.

I wrote the VHD containing the OS disk (osVHD) to another RAID 1 set, one that mainly contains the database files. Being unable to boot the server, I took both drives and placed them in separate systems. On the first system I started the import into our hypervisor cluster (XenServer); on the second system I started uploading the SQL databases to a new virtual disk on the main XenServer cluster. After a short time I noticed this would take too much time: importing the OS disk alone would take over 4 hours, and I had about 2.5 hours left. So I switched tactics. I had a VHD, which is the native virtual disk format for Windows, including Hyper-V. So I installed Hyper-V on a lab system, connected one of the RAID 1 disks to it and created a new VM from the osVHD. It booted at the first attempt.

Now I only had to get the database files to the new VM. To do this I created a new VHD, mounted it locally and copied all the databases to it. Next I attached the VHD containing the databases to the VM, and we were back online.
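On the Hyper-V side, these steps can be scripted with the Hyper-V PowerShell module. A sketch, with made-up paths and VM name:

```powershell
# Create a new dynamically expanding VHD and mount it on the host
New-VHD -Path D:\VHD\databases.vhd -SizeBytes 500GB -Dynamic
Mount-VHD -Path D:\VHD\databases.vhd

# ...initialize/format the new disk and copy the database files to it...

# Detach it from the host, then attach it to the VM
Dismount-VHD -Path D:\VHD\databases.vhd
Add-VMHardDiskDrive -VMName "SQL01" -Path D:\VHD\databases.vhd
```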

At this point it was just before 7 AM and some of my colleagues were already entering the office, but the server was working again, albeit on one (fairly new) disk. Fortunately it was Friday, so it only needed to work for about 12 hours before I would have an entire weekend to migrate the VM to our XenServer cluster.

That Friday evening I started working towards the migration to the XenServer cluster. My initial idea was to prepare the VHD for direct import.

The first order of business was to shrink the OS disk, because it would consume 1 TB on the cluster while only 200 GB was in use. Before this could be done the disk needed to be defragmented; even after defragmentation the partition could only be shrunk down to around 400 GB, because there was an “unmovable file” in the way (a Windows limitation). 400 GB was still much better than 1 TB, but I had only shrunk the partition: the disk size was still 1 TB, and since XenServer imports the entire disk (even if there is no data on it) this would be a problem.
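Shrinking can also be done from a diskpart console, which reports the shrink ceiling (capped by the first unmovable file) up front. A sketch; the numbers are examples and the parentheses are annotations:

```text
DISKPART> select volume C
DISKPART> shrink querymax        (maximum reclaimable space, capped by unmovable files)
DISKPART> shrink desired=614400  (desired shrink in MB; roughly 600 GB here)
```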

The process of shrinking the partition took about 4 hours, and required a trip back to the office because I had not enabled Remote Desktop on the Hyper-V server (it was late, alright).

Luckily there is a solution for every problem. In the old days, up until XenServer 6.2, there was a physical-to-virtual import utility available, and I still had a copy of it. Of course it was no longer possible to directly import a “physical” machine into our XenServer cluster, but the utility could still export an XVA (XenServer Virtual Appliance).

After so many problems there was finally a stroke of luck. Or not: during the export the makeshift Hyper-V server got a BSOD on a Realtek driver. The second attempt (Saturday evening) finished without errors, and the import started Sunday morning around 8 AM. The import finished around 3 PM (still Sunday), but unfortunately the newly created VM produced a BSOD: “STOP: c00002e2 Directory Services could not start….”. Which roughly translates to: “unable to read the Active Directory database (NTDS.DIT)”.

After a lot of digging around, I noticed the boot drive letter had changed from C to H. After changing the drive letter back to C by following this article from Microsoft, the server started functioning again :).

At this point it was around 6 PM on Sunday. The server was back up, and all applications were working again.

 

A lot of lessons can be learned from this experience, for example:

  • Relying on 7-year-old drives for a critical core business application is not a good plan. The typical lifespan of an HDD is about 5 years; after that time the drives of critical systems need to be swapped out.
  • New and old drives don’t always mix. Going virtual first and only later trying to restore to physical is a good way to go, especially with older drives.
  • A single physical server is not adequate for a critical core business application. Redundancy is important, either at the hypervisor level or at the application level.

image from: https://www.datarc.ru

 

First post (again)

After 2.5 years I’m back with a new blog, and it’s not black and orange (progress, people!).

Seriously: after a long period of not having/making the time to write about day-to-day ICT problems (and solutions) and private projects, I’m back.

This blog will (just like the last one) contain a combination of installation/configuration guides from my lab setup, private projects, and solutions to real-life ICT problems which are not easily found on the web already.

Okay, you got me, sometimes they are easily found on the web already, but having an extra reference is always nice.

I hope to see you around!

Images from: xkcd.com and skilledup.com