wxGlade and Accelerator Keys

Currently I am using wxGlade 0.8.3 and wxPython 4.0.3, and I observe strange behavior when using accelerator keys for menus.

In wxGlade I have to define them using a double backslash, otherwise they won't work:

[Screenshot: accelerator key definition in wxGlade, using a double backslash]

The weird outcome is, my GUI afterwards looks like this:

[Screenshot: the resulting wxPython menu, with stray backslashes in the accelerator keys]
Thus I need this Python code to fix my menu entries:

# Strip the left-over backslashes from the item labels of the first menu.
for mi in self.GetMenuBar().GetMenu(0).GetMenuItems():
    txt = mi.GetItemLabel()
    if "\\" in txt:
        mi.SetItemLabel(txt.replace("\\", ""))
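
If more than one menu is affected, the same cleanup can be run over the whole menu bar; a small variant (a sketch using the standard wx.MenuBar API):

menubar = self.GetMenuBar()
# Walk every top-level menu, not just the first one.
for i in range(menubar.GetMenuCount()):
    for mi in menubar.GetMenu(i).GetMenuItems():
        txt = mi.GetItemLabel()
        if "\\" in txt:
            mi.SetItemLabel(txt.replace("\\", ""))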

Hmmm


There is no such thing as a stupid question

I hope we all agree on this: "There is no such thing as a stupid question". Stupid answers, on the other hand, exist in abundance.

Anyway, my first reaction when reading that question on quora.com was: what a stupid question! "Can you write a program for adding 10 numbers?"

I am still not sure what the intention behind that question was; nevertheless it yielded a good and interesting discussion thread:

  • with a lot of code samples
  • with a lot of fun; see this answer or this one
  • with an interesting discussion about how to phrase useful requirements for software development ("what did he mean by 'adding'?", see this answer)
  • with an interesting way to deal with those requirements: instead of firing up your IDE right away, think about how else the problem could be solved; see this answer.

Thus, nice evidence that every question makes some sense, no matter how stupid it sounds initially.
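
For the record, here is one minimal Python reading of the question (a sketch assuming "adding" simply means summing ten numbers typed in by the user):

# Read ten numbers from the user and print their sum.
numbers = [float(input("Number %d: " % (i + 1))) for i in range(10)]
print("Sum:", sum(numbers))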


Scrapy

Yesterday, during another boring phone call, I googled for "fun python packages" and bumped into this nice article: "20 Python libraries you can't live without". While I already knew many of the packages mentioned there, one caught my interest: Scrapy. Scrapy seems to be an elegant way not only to parse web pages but also to traverse them, mainly those which have some sort of "Next" or "Older posts" button you want to click through to, e.g., retrieve all pages of a blog.

I installed Scrapy and ran into an import error; as mentioned in the FAQ and elsewhere, I had to manually install pypiwin32:

pip install pypiwin32

Based on the example on the home page I wrote a little script to retrieve titles and URLs from my German blog "Axel Unterwegs" and enhanced it to write those into a table-of-contents type HTML file, after figuring out how to override the init and close methods of my spider class.

import scrapy

header = """
<html><head>
<meta content='text/html; charset=UTF-8' http-equiv='Content-Type'/>
</head><body>
"""
footer = """
</body></html>
"""

class BlogSpider(scrapy.Spider):
    name = 'blogspider'
    start_urls = ['http://axelunterwegs.blogspot.co.uk/']

    def __init__(self, *a, **kw):
        super(BlogSpider, self).__init__(*a, **kw)
        # Collect all titles and URLs into a simple HTML table of contents.
        self.file = open('blogspider.html', 'w', encoding='utf-8')
        self.file.write(header)

    def parse(self, response):
        # Every post title is an <h3 class="post-title"> with the link inside.
        for title in response.css('h3.post-title'):
            t = title.css('a ::text').extract_first()
            url = title.css('a ::attr(href)').extract_first()
            self.file.write('<a target="_NEW_" href="%s">%s</a>\n<br/>' % (url, t))
            yield {'title': t, 'url': url}

        # Follow the 'Older posts' pager link until there are no more pages.
        for next_page in response.css('a.blog-pager-older-link'):
            yield response.follow(next_page, self.parse)

    def closed(self, reason):
        # Scrapy calls closed() automatically when the spider finishes,
        # so the footer is written exactly once at the end.
        self.file.write(footer)
        self.file.close()
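
To try it, assuming the script is saved as blogspider.py, no full Scrapy project is needed:

scrapy runspider blogspider.py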

Thus, here is the TOC of my German blog.

I tried to get the same done with my English blog here on WordPress but have been struggling so far. One challenge is that the modern WordPress UI does not have any 'Older posts' type of button anymore; new postings are loaded as soon as you scroll down. Also the parsing doesn't seem to work yet, but maybe I will figure it out some time later.
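
One idea I might try, sketched here untested: many WordPress blogs still serve classic paged archive URLs of the form /page/2/, /page/3/ and so on, so a spider could walk those instead of fighting the infinite scroll. The blog URL and the CSS selector below are assumptions and would need adapting to the actual theme:

import scrapy

class WordPressSpider(scrapy.Spider):
    name = 'wpspider'
    start_urls = ['https://example.wordpress.com/']  # hypothetical blog URL

    def parse(self, response):
        # 'h2.entry-title a' is a common WordPress theme selector (an assumption).
        for link in response.css('h2.entry-title a'):
            yield {
                'title': link.css('::text').extract_first(),
                'url': link.css('::attr(href)').extract_first(),
            }
        # Request the next classic archive page; stops once it returns a 404.
        page = response.meta.get('page', 1) + 1
        yield response.follow('/page/%d/' % page, self.parse, meta={'page': page})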

 

 

Project Jupyter

Project Jupyter is an open source project that lets you run Python code in a web browser, focusing on interactive data science and scientific computing, not only for Python but across all programming languages. It is a spin-off from IPython, which I blogged about here.
Typically you would have to install Jupyter and a full stack of Python packages on your computer and start the Jupyter server to get started.
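
In the simplest case that boils down to two commands (assuming pip is available; a full scientific stack would add more packages):

pip install jupyter
jupyter notebook
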
But there is also an alternative available on the web where you can run IPython notebooks for free: https://try.jupyter.org/
This site does not allow you to save your projects permanently, but you can export and download projects and also upload notebooks from your local computer.
IPython notebooks are a great way to get started with Python and learn the language. They make it easy to run your script in small increments and preserve the state of those increments, aka cells. They also nicely integrate output into your workflow, including graphical plots created with packages like matplotlib.pyplot, and they come with a primitive markup language for adding documentation to your scripts.
The possibilities are endless with IPython or Jupyter, whether you want to learn Python as a language or data analysis techniques.
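
As a tiny illustration, a typical notebook cell might look like this (a minimal sketch; the inline plot shows up right below the cell):

# A typical notebook cell: compute something and plot it inline.
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 100)
plt.plot(x, np.sin(x))
plt.title("sin(x)")
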
I was inspired to get started with this again by this video on IBM developerWorks: "Use data science to up your game performance". And the book "Learning IPython for Interactive Computing and Data Visualization – Second Edition" by Cyrille Rossant is where I got the tip about free Jupyter on the web.

Of course you can also sign up for a trial on IBM's Bluemix and start an IBM Data Science Experience project.

Just discovered: jsconsole.com

Just discovered jsconsole.com, an awesome way to quickly test out some javascript code.

So far I like jsFiddle to test out javascript code in the context of an html page and css styles, or cscript to run some javascript locally in a command prompt window.

jsconsole runs in your browser but works like a console: you just type or paste in javascript code and see the output right there. Like:

a = 1;
1
b = 3;
3
a + b
4
A nice feature is that you can paste in functions as well and execute those:
function addit(v1,v2) { 
  return v1+v2; 
}
addit(a,b);
4
A much nicer feature is that you can load any web page to make it available as a document in your javascript context:
:load www.google.com

Loading url into DOM…

DOM load complete

You can also use that “:load” command to load any external scripts, or a javascript framework like jquery:

:load http://ajax.googleapis.com/ajax/libs/jquery/2.1.0/jquery.min.js

Loading script…

Loaded http://ajax.googleapis.com/ajax/libs/jquery/2.1.0/jquery.min.js

Now that we have the google page available as a document and jquery as a library, we can easily find out (programmatically) the text of the two buttons on that page:

$("input").each( function() {if ($(this).attr("type") == "submit") { console.log($(this).attr("value")); }});

"Google Search"

"I’m Feeling Lucky"

Isn’t that just … wow!?

Flowcharter 0.1 alpha

Last week in my drawer I discovered a very early version of Visio or ABC Flowcharter:

I probably got this in 1980 when I joined IBM. Boy, did we really draw flow charts on paper for the software we wrote? Actually we did, during the programming classes I went through at the beginning.

For my diploma thesis, which I wrote in 1983 with Script on a mainframe computer, I had to come up with some ASCII art to enrich my paper with flowcharts:

IPython and lxml

I have been playing a bit with ipython and lxml these days.

IPython is a powerful interactive shell for Python. It supports browser-based notebooks with support for code, text (actually html markup), mathematical expressions, inline plots and other rich media. Nice intro here:

Another nice demo of what you can do with ipython is the pandas demo video here.

Several additional packages need to be installed first to really be able to use all these features, like pandas, matplotlib or numpy. It is a good idea to install the entire scipy stack, as described here.

I did the installation first on my windows thinkpad and later on a Mint Linux box.

This is some work to get through, like bumping into missing dependencies and installing those first, or trying several installation methods in case of problems. Sometimes it is better to take a compiled binary, sometimes to use pip install, sometimes to fetch a source code package and go from there.

I finally succeeded on both my machines. The next step was to figure out how to run an ipython notebook server, because using ipython notebooks in a browser is the most efficient and fun way to work with ipython. For Windows there are useful instructions here; on my Linux Mint machine it worked very differently, and I finally found working instructions here.
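
Once everything is installed, starting the server locally is a single command (using the IPython CLI of that era; newer installs would use jupyter notebook instead):

ipython notebook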

After that I developed my first notebook using lxml, called GetTableFromWikipedia, which basically goes out to a wikipedia page (in my case the one about chemical elements), fetches a table from there (in my case table # 10 with a list of chemical elements), retrieves that table using lxml and xpath code and converts it to csv.
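
The core of that notebook boils down to a few lines like the following (a rough sketch, not the notebook's exact code; it assumes requests for the download, the wikipedia page on chemical elements as the URL, and the table index mentioned above):

import csv
import requests
from lxml import html

# Fetch the wikipedia page and parse it into an element tree.
page = html.fromstring(requests.get(
    'https://en.wikipedia.org/wiki/List_of_chemical_elements').content)

# Pick one table by index via xpath, then walk its rows and cells.
table = page.xpath('//table')[9]          # table # 10, zero-based index 9
with open('elements.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    for row in table.xpath('.//tr'):
        cells = [c.text_content().strip() for c in row.xpath('./th|./td')]
        writer.writerow(cells)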

The nice thing about ipython is that you can write code into cells and then just re-run those cells to see results immediately in the browser. This makes it very efficient and convenient to develop code by simply trying things out, or to do a lot of "prototyping", which sounds more professional.

Having an ipython notebook server running locally on your machine is certainly a must for developing a notebook. But how do you share notebooks with others? I found http://nbviewer.ipython.org, which allows sharing notebooks with the public. You have to store your notebook somewhere in the cloud and pass the URL to nbviewer. I uploaded my notebook to one of my dropbox folders and here we go: have a look! Unfortunately it is not possible to actually run the notebook with nbviewer (nbviewer basically converts a notebook to html).

My notebook of course works with other tables too, like the List of rivers longer than 1000 km, published in this wikipedia article as table # 5.