How to manually update VirtualBox Guest Additions

After upgrading VirtualBox on your system it is highly recommended to also update the VirtualBox Guest Additions in your guest systems.
This should happen automatically but typically doesn’t, at least in my case; it just happened to me again after upgrading to VirtualBox 5.2.8.

How to update the VirtualBox Guest Additions is described here:

In the “Devices” menu in the virtual machine’s menu bar, VirtualBox has a handy menu item named “Insert Guest Additions CD image”, which mounts the Guest Additions ISO file inside your virtual machine.

This did not work in my case: when clicking “Insert Guest Additions CD image” I get an error message which is not very specific; it just says the desired CD image could not be mounted. A closer look reveals that the image is actually already mounted; the installer just did not start automatically to perform the update.

Here is what I do in this case:

  1. Open a terminal window
  2. Type ‘df -h’ to find the path to the mounted CD image. In my case that entry looks like this:
    /dev/sr0 56M 56M 0 100% /media/amagard/VBox_GAs_5.2.8
  3. cd to this path ( here: /media/amagard/VBox_GAs_5.2.8 )
  4. Type ‘sudo ./VBoxLinuxAdditions.run’ ( the standard installer name on the Guest Additions ISO )

This kicked off the installation of the VirtualBox Guest Additions.
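Put together as one shell session (the mount point is taken from the df output above; VBoxLinuxAdditions.run is the standard installer name shipped on the Guest Additions ISO):

```shell
# 1. Find the mount point of the Guest Additions CD image
df -h | grep VBox_GAs

# 2. Change into the mount point reported by df
cd /media/amagard/VBox_GAs_5.2.8

# 3. Run the Linux installer from the mounted ISO
sudo ./VBoxLinuxAdditions.run
```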


Jupyter: Plotting pivots & changing legend entries

A while ago I blogged about project Jupyter, and over the last days I have been working a lot with it; I am still fascinated by its power.

Today I faced and solved two challenges I would like to share here:
  1. plotting a pivot table
  2. changing legend entries

Assume we have the following dataframe:
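The dataframe itself was shown as an image that did not survive; a hypothetical stand-in with the columns the pivot below relies on (names and numbers are made up) could look like this:

```python
import pandas as pd

# Hypothetical reconstruction of the dataframe shown in the original post:
# one row per department, with male/female employee counts per Org.
df = pd.DataFrame({
    "Org":              ["Sales", "Sales", "Dev", "Dev", "Dev"],
    "Department":       ["EMEA", "US", "Backend", "Frontend", "QA"],
    "Male employees":   [10, 12, 20, 8, 5],
    "Female employees": [12, 9, 15, 11, 6],
})
print(df)
```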

Creating a pivot is a piece of cake by using the pandas pivot_table method on that dataframe:

pivot = pd.pivot_table(df, index=["Org"], values=["Male employees", "Female employees"],
                       aggfunc=[len, "sum", "mean", "min", "max"])  # aggfunc reconstructed from the statistics listed below


This gets us:
  1. the number of departments per org ( = len of Female employees or Male employees )
  2. the sum of male and female employees per org ( = sum of Female employees and Male employees )
  3. as well as mean, min and max

How to plot?
We can simply save the pivot table as a new dataframe ‘pivot’ and call its plot method. Let’s say we want to plot the sum of male and female employees per org. First we drop the other statistics from the pivot table that we don’t need for the plot. Then we plot:
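A sketch of that step, assuming the pivot was built with a list of aggregation functions as above, so the columns form a MultiIndex whose top level names the statistic (data here is made up):

```python
import pandas as pd

# Made-up data mirroring the dataframe sketched earlier.
df = pd.DataFrame({
    "Org":              ["Sales", "Sales", "Dev"],
    "Male employees":   [10, 12, 20],
    "Female employees": [12, 9, 15],
})

# Two statistics, so the columns become a MultiIndex: (statistic, column).
pivot = pd.pivot_table(df, index=["Org"],
                       values=["Male employees", "Female employees"],
                       aggfunc=["sum", "mean"])

# Keep only the 'sum' statistic, dropping the others from the plot.
pivot_sum = pivot["sum"]

# pivot_sum.plot(kind="bar") would now draw one bar per Org and column;
# it is left commented out so the sketch runs without a display.
print(pivot_sum)
```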




The only problem here is that the legend entries of this plot look a bit cryptic. Here is some code to fix this:


import matplotlib.pyplot as plt

ax = plt.gca()
handles, labels = ax.get_legend_handles_labels()
new_labels = []
for l in labels:
    # labels look like "(sum, Male employees)"; keep only the column name
    new_labels.append(l.strip("()").split(", ")[-1])
ax.legend(handles, new_labels)


I have shared the entire notebook here.

How to print ipython notebooks without the source code

This is something I really need in order to create sort-of standard reports based on ipython notebooks, which should not contain the source code and input prompts of ipython cells: the capability to print ipython notebooks without the source code.

There are ways to do that, as discussed here on stackoverflow, but all these methods involve adding some ugly code to your ipython cells or tweaking the way the ipython server is started ( or running nbconvert ), which might be out of your control if you use a cloud offering like Data Science Experience on IBM Cloud rather than your own ipython installation.

Here is how I achieve this:

I simply download my notebook as html.

Then I run this python script to convert that html file so that prompts and code cells are gone:

FILE = "/somewhere/myHTMLFile.html"

with open(FILE, 'r') as html_file:
    content = html_file.read()

# Get rid off prompts and source code
content = content.replace("div.input_area {","div.input_area {\n\tdisplay: none;")    
content = content.replace(".prompt {",".prompt {\n\tdisplay: none;")

f = open(FILE, 'w')
f.write(content)
f.close()

That script basically adds the CSS ‘display: none’ attribute to all divs of class ‘prompt’ or ‘input_area’.

That tweaked html page can now easily be printed into a pdf file to get my standard report without any code or input prompt cells.

If you know what you are doing you can add more CSS tweaking, like e.g. this one, to that Python code:

# For dataframe tables use Courier font family with smaller font size
content = content.replace(".dataframe thead","table.dataframe { font-size: 7px; font-family: Courier; }\n.dataframe thead")

To figure out things like that I used the Firefox Inspector to determine the class names of DOM elements ( e.g. ‘table.dataframe’ is used to display dataframe tables in ipython ) and some CSS knowledge to achieve the manipulations I find useful, like reducing the font size of tables to make them fit on pages printed in portrait orientation.

How I installed Teamviewer 12 on Linux Mint 18.2


Here is how I installed Teamviewer 12 on Linux Mint 18.2 64bit:
1. Downloaded Debian package from here
2. Located file in Downloads folder, right-click, “Open with GDebi Package Installer”
3. Ran into an issue about a missing dependency: the libdbus library
4. Ran this as recommended elsewhere; it didn’t help: sudo apt-get install -f
5. Found this useful discussion thread, basically ran the first two commands recommended there, then re-attempted the install through the package manager successfully.

sudo dpkg --add-architecture i386
sudo apt-get update



Yesterday, during another boring phone call, I googled for “fun python packages” and bumped into this nice article: “20 Python libraries you can’t live without“. While I already knew many of the packages mentioned there, one caught my interest: Scrapy. Scrapy seems to be an elegant way not only to parse web pages but also to traverse them, mainly those which have some sort of ‘Next’ or ‘Older posts’ button you want to click through to e.g. retrieve all pages from a blog.

I installed Scrapy and ran into an import error; thus, as mentioned in the FAQ and elsewhere, I had to manually install pypiwin32:

pip install pypiwin32

Based on the example on the home page I wrote a little script to retrieve titles and URLs from my German blog “Axel Unterwegs” and enhanced it to write those into a table-of-contents type HTML file, after figuring out how to override the __init__ and closed methods of my spider class.

import scrapy

# The header/footer strings were truncated in the original post;
# this is a minimal reconstruction around the one surviving line.
header = """<html><head>
<meta content='text/html; charset=UTF-8' http-equiv='Content-Type'/>
</head><body>
"""
footer = """</body></html>
"""

class BlogSpider(scrapy.Spider):
    name = 'blogspider'
    start_urls = ['']  # the blog's start URL was elided in the original

    def __init__(self, *a, **kw):
        super(BlogSpider, self).__init__(*a, **kw)
        self.file = open('blogspider.html', 'w')
        self.file.write(header)

    def parse(self, response):
        for title in response.css(''):  # post-title CSS selector elided in the original
            t = title.css('a ::text').extract_first()
            url = title.css('a ::attr(href)').extract_first()
            self.file.write('<a target="_NEW_" href="%s">%s</a>\n<br/>' % (url, t))
            yield {'title': t, 'url': url}

        for next_page in response.css(''):  # 'Older posts' link selector elided in the original
            yield response.follow(next_page, self.parse)

    def closed(self, reason):
        # Scrapy calls closed() automatically when the spider finishes
        self.file.write(footer)
        self.file.close()

# Run with: scrapy runspider blogspider.py

Thus, here is the TOC of my German blog.

I tried to do the same with my English blog here on WordPress but have been struggling so far. One challenge is that the modern UI of WordPress does not have any ‘Older posts’ type of button anymore; new postings are retrieved as soon as you scroll down. Also the parsing doesn’t seem to work for now, but maybe I’ll figure it out some time later.



Project Jupyter

Project Jupyter is an open source project that lets you run Python code in a web browser, with a focus on supporting interactive data science and scientific computing not only in Python but across all programming languages. It is a spin-off from IPython, which I blogged about here.
Typically you would have to install Jupyter and a full stack of Python packages on your computer and start the Jupyter server to get started.
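That local setup, sketched as a shell session (package names are the usual ones; versions omitted):

```shell
# Install Jupyter plus a typical scientific Python stack
pip install jupyter numpy pandas matplotlib

# Start the notebook server; it serves on http://localhost:8888
jupyter notebook
```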
But there is also an alternative available on the web where you can run IPython notebooks for free:
This site does not allow you to save your projects permanently, but you can export projects and download and also upload notebooks from your local computer.
IPython notebooks are a great way to get started with Python and learn the language. They make it easy to run your script in small increments and preserve the state of those increments, aka cells. They also nicely integrate output into your workflow, including graphical plots created with packages like matplotlib.pyplot, and come with a simple markup language for adding documentation to your scripts.
The possibilities are endless with IPython or Jupyter – to learn Python as a language or data analysis techniques.
I was inspired to get started with this again by this video on IBM developerWorks: “Use data science to up your game performance“. And the book “Learning IPython for Interactive Computing and Data Visualization – Second Edition” by Cyrille Rossant is where I got the tip about free Jupyter on the web.

Of course you can also sign up for a trial on IBM’s Bluemix and start an IBM Data Science Experience project.

Victim of some Facebook Phishing

Today I became a victim of some Facebook credentials phishing. I received an instant message from one of my Facebook contacts containing a video. When trying to play the video I got prompted to enter my Facebook credentials. After I had done this … my credentials went into the wrong hands. And it became obvious that the video was not from my contact.
This happened on my smartphone. I believe on a PC this would never have happened to me, because there are many ways to cross-check urls, links and other things to detect phishing. On a mobile device it is much harder. The login screen really looked authentic.
The result: many dubious videos sent to all my contacts. Facebook right away locked my account because they detected suspicious behavior. I also read ( too late ) the warning from the Facebook contact from whom I had received the malicious message that her own account had been compromised.
I unlocked my Facebook account by setting a new password and acknowledging a confirmation code; Facebook did quite a good job of detecting the problem and taking me through the steps to resolve it. I then posted a warning on my Facebook page and also sent warning messages to most of my contacts; luckily I have fewer than 100. 🙂
Interestingly, my Chrome browser on one of my laptops later insisted on downloading a Malicious Software Removal tool from Facebook, which was right away blocked by my virus scanner. This happened while Facebook was working fine in my Firefox browser. I found this very helpful hint here ( see comment #3 in this lengthy article ) on how to overcome this strange measure and enable Facebook again in my Chrome browser.