Last week PySide was launched, and the team was glad to see the project finally go public and receive community feedback, be it positive, negative, or both. Many questions arose, like “Why duplicate efforts?” Well, I can’t say much more than what is already answered in the PySide FAQ. For us (the team) the fact is that we had a task to accomplish and had to perform it as best we could. That said, allow me to remind you that this is my personal blog and many of the views written here are my own cherished opinions.

The other question we were waiting for, and my personal favorite, was “Why Boost.Python?”. Tough one. First of all, Boost.Python greatly eases the creation of Python bindings for C++ libraries. How do you infer which method signature to call based on the Python arguments passed to the method wrapper? Boost.Python takes care of it. Inheritance? Type conversion (as opposed to type wrapping)? You bet: Boost.Python takes care of all this for you. The feature-rich Boost.Python gave us a great kick start, and at first we progressed very fast. Occasionally some strange bug appeared and it took some time to figure out the problem through the jungle of template error messages. Part of the job anyhow, and after that: fast pace again.

At some point somebody checked the size of the produced binary modules. “Hey guys, is that correct?”, “Ah, just strip the file.”, “Still huge.”, “Holy cow…”. Next task: size reduction. Some redesigns shaved off a good deal of megabytes, g++ flags also helped, but these things weren’t enough. Then came a new idea: “Let’s try the Intel C++ compiler and see what we get.” It gave binary modules of feasible size. Good, but the test only proved that the reductions were possible. Besides, there were still other new ideas to try, plus the fact that as soon as the project was launched the community would step in and say “I had this size problem with Boost.Python before. Here is how I solved it…”. (Which reminds me how limited, communication-wise, a project is in its non-open phase. And don’t point your finger, mister: every open source project has its non-open phase, even if only in your head!)

Part of the team was growing skeptical about the size reduction problem. Why not try CPython code generation right now? Well, some say you can’t change a plane’s engine in mid-flight, and this is true. Feature-wise we were almost there, and the reduction was possible. Also, some of us had mixed feelings about CPython. In a past project a comparison was made of writing bindings with different technologies, including plain CPython, to check speed, size, and the burden imposed on the developer. In the end the guy using CPython had good numbers (not stunningly better than the others, at least for the case that mattered back then), but his personal impression was that he was suffering from Stockholm syndrome: he knew CPython abused him, but he had developed a bond with his captor.

Still, almost everyone started personal (and voluntary, i.e. done at home) experiments with different CPython generators (even a ctypes one!), and in the end all the ideas (including those from the Boost.Python generator) were merged into a single CPython generator, called Shiboken.


Before going on with this, allow me to explain that Shiboken means absolutely nothing. Not Buddhist emptiness; I just mean that the word Shiboken has no meaning attached to it. Except, of course, “generator of CPython-based binding code for C/C++ libraries”.

Disclaimer: I don’t know a thing about the Japanese language, and the above kanji are just something I found on Wiktionary to match the sounds of Shiboken. Forgive me, Lauro. 🙂

The conspirators’ plan was to develop the alternative generator to a point where it could generate PySide bindings that pass all our unit tests, run the apps, etc., thus being able to replace the Boost.Python front-end. For PySide users, i.e. Python programmers, the replacement would have no impact, since the API would remain the same. The Shiboken generator is based on the same principles as the Boost.Python one: it is built using the API Extractor library, how the C++ library should be exported to Python is described in a Type System file, and so on.
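To give a flavor of what such a description looks like: the Type System file is an XML document. Here is a minimal sketch; the package and class names are made up, and the exact element set should be checked against the generator’s documentation.

```xml
<?xml version="1.0"?>
<typesystem package="Foo">
    <!-- Expose the C++ class Bar to Python -->
    <object-type name="Bar">
        <!-- Hide an internal method from the Python API -->
        <modify-function signature="internalSetup()" remove="all"/>
    </object-type>
</typesystem>
```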

Shiboken Generator

The power of Lego-fu!

When Shiboken reached a point we thought was good enough to start working on it at work, we presented it to our bosses and got the green light. The Boost.Python generator will continue as the tool used to generate the official PySide bindings, but with these parallel efforts we hope that Shiboken takes its place, the size reduction is achieved, and Occam’s razor cuts off the unnecessary entities.


Occam's razor demands that Boost.Python go

C/C++ Bindings

Another improvement we aim for with Shiboken is allowing non-Qt C++ and C libraries to be wrapped with the generator scheme. The problem with the current Boost.Python generator is that even a non-Qt library wrapped with it will depend on Qt. Of course this is not the best we can do, but fixing it had low priority, since the PySide bindings are the main target of the work. Shiboken was made library-agnostic from the start, especially because we did not want to take our chances wrapping the whole of Qt right away: a test library containing all the problems that could arise was written, and it is the source of all Shiboken unit tests.

Shiboken Features Worth Noting

Abandoning Boost.Python means abandoning some features already provided by it. One worth mentioning is deciding which signature of an overloaded method should be called depending on the Python arguments passed in the call. For this to work we have to write a decisor that progressively checks the argument types until the correct signature is found. Just that? Of course not: the binding developer can use the Type System description to remove method signatures, remove or change the types of their arguments, and even remove, change, or set their default values! The method call decisor must take all of this into account.
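To make the idea concrete, here is a minimal Python sketch of what such a decisor does. The class and method names are invented for illustration; the real generated code performs these checks with the CPython C API.

```python
# Sketch of an overload decisor for a hypothetical C++ method with two
# signatures: add(int, int) and add(str).  All names are hypothetical.
class FakeWrappedObject:
    def add_int_int(self, a, b):   # stands in for the add(int, int) overload
        return a + b

    def add_str(self, s):          # stands in for the add(str) overload
        return "str:" + s

def call_add(obj, *args):
    # Progressively check argument count and types until a signature matches.
    if len(args) == 2 and isinstance(args[0], int) and isinstance(args[1], int):
        return obj.add_int_int(args[0], args[1])
    if len(args) == 1 and isinstance(args[0], str):
        return obj.add_str(args[0])
    raise TypeError("add(): unsupported argument types")
```

Every Type System modification (a removed signature, a changed argument type, a new default value) changes which branches the generator must emit in this kind of decisor.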

Overloaded Method

Debugging a multiple signature method call decisor is easier with the "dumpGraph" method.

No Boost.Python also means that it would be harder to convert types and containers back and forth between C++ and Python. The template specialization technique was used to solve this.

template <> struct Converter<bool> {
  static PyObject* toPython(ValueHolder<bool> holder) {
    return PyBool_FromLong(holder.value);
  }
  static bool toCpp(PyObject* pyobj) {
    return pyobj == Py_True;
  }
};

Why not generate “pyobj == Py_True” directly, you say? The above code scales better, since it stays the same for types composed of primitive types inside containers inside containers, and so on. Besides, the compiler can be counted on to inline short methods.
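To illustrate why a uniform conversion interface scales, here is a rough Python analogy of the template specialization idea (the class names are mine, not Shiboken’s): a container converter only delegates to the converter of its element type, so converters nest for free.

```python
# Python analogy of the C++ Converter<T> specializations: every converter
# exposes the same to_python/to_cpp pair, so a list converter can be built
# generically on top of any element converter.
class BoolConverter:
    @staticmethod
    def to_python(value):
        return bool(value)

    @staticmethod
    def to_cpp(pyobj):
        return pyobj is True

class ListConverter:
    def __init__(self, element_converter):
        self.element = element_converter

    def to_python(self, values):
        return [self.element.to_python(v) for v in values]

    def to_cpp(self, pylist):
        return [self.element.to_cpp(v) for v in pylist]

# Converters compose: a list of lists of bool reuses the same pieces.
nested = ListConverter(ListConverter(BoolConverter))
```

The generated code only has to name the right converter for a type; the conversion of arbitrarily nested containers falls out of the composition.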

“They have a plan”

Right now Hugo (of PySide fame) has started working on generating bare QtCore bindings without QObject and signals, and will then move on to QObject. After that we should solve the signals problem, rewrite the pieces of custom code, and so on. I think that after QtCore is completely done we can compare it with the version produced by the Boost.Python generator and see if the whole Shiboken idea lives up to its promise. May the best one win, for the honor and glory of open source.

We encourage everyone interested in creating Python bindings for C++ libraries to test Shiboken, report problems (we now have a component on the OpenBossa bugzilla!), and tell us if something is missing for your library to work. Patches are, as usual, always welcome. 🙂

Shiboken on gitorious:

SQLite + Vala

One of the side effects of the last post (and of insomnia) was that I discovered why I couldn’t use the Vala bindings for SQLite’s C API. It has the function exec, used to execute SQL statements, which takes as a parameter a callback function to be executed after the query finishes.

The sqlite3_exec function

int sqlite3_exec(sqlite3*, const char *sql, sqlite_callback, void*, char**);

The callback function’s signature in C is:

typedef int
(*sqlite_callback) (void*, int, char**, char**);

And in the Vala binding:

public static int
callback(pointer data, int n_columns, string[] values, string[] column_names) {

I was trying to port the code from this tutorial to Vala, but even though it compiled, it ended in a segfault, and before that it printed some completely wrong data.

As you well know, the valac compiler can generate C code; just leave out the “-o outfile” option: valac --pkg sqlite3 simplesqlite.vala, and there you go, simplesqlite.h and simplesqlite.c are created instead of the usual executable. I did that, and look at what it was generating:

demo_callback (gpointer data, gint n_columns, int values_length1, char** values, int column_names_length1, char** column_names) {

The problem is that valac turns Vala arrays into C arrays preceded by an integer giving their length. You can avoid this behavior by telling the compiler the following:

[NoArrayLength ()]
public static int
callback (pointer data, int n_columns, string[] values, string[] column_names) { (...)

It took me a little while to figure this out (the Vala documentation is still in its early stages). I tried [NoArrayLength], but that still resulted in a char** preceded by an int, which was simply set to -1. I even reported a bug (487612), but I have already corrected my gaffe.


I finished my first piece of work at INdT: a tutorial on how to use Tapioca (more specifically tapioca-glib). It took longer than it should have, but considering that this month is a contender for the worst… no, most complex, month of my life that I can remember (that is, out of the last 2 years), it wasn’t such a bad performance (I hope).

A visual idea of what Tapioca is (but the really good thing is to read the tutorial :).

Tapioca Diagram

And the example application is quite simple:

Tapioca Test Screenshot

LogicParser 0.7

New LogicParser release. I’m getting close to satisfied: a few more refinements and it can become 1.0.

LogicParser 0.7

This time the problem was “Fit to window”: I needed to change the zoom to match the Viewport size, and it has no obvious “get_width”-style method. The solution was:

vp_width = GTK_WIDGET(lpApp->viewport)->allocation.width - 4;
vp_height = GTK_WIDGET(lpApp->viewport)->allocation.height - 4;

In other words, every GtkWidget has an “allocation” property, which in turn has “width” and “height” properties. (The -4 was just to compensate for some spacing.)
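With the viewport size in hand, the “fit to window” zoom itself boils down to a small computation. A Python sketch of the general idea (not LogicParser’s actual code, and the names are mine):

```python
def fit_zoom(viewport_w, viewport_h, content_w, content_h, padding=4):
    """Return the zoom factor that fits the content inside the viewport."""
    # Leave a little room for spacing, as in the -4 above.
    usable_w = viewport_w - padding
    usable_h = viewport_h - padding
    # Pick the tighter of the two ratios so both dimensions fit.
    return min(usable_w / content_w, usable_h / content_h)
```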

Besides that, this version desperately needs its memory leaks fixed: a few zooms and it will soon be occupying 100 MB of memory. I keep putting off using Valgrind.

Download: logicparser-0.7.tar.gz

LogicParser 0.6

After a short time (about 17 hours), another LogicParser release. It is now possible to assign values to the propositions, and the result of the complete expression is evaluated.

LogicParser 0.6

To change a proposition’s value, just place the cursor over the value in the list on the left and hold the left button; a combobox will then appear with the value options (True or False).

Download: logicparser-0.6.tar.gz

Compiling LogicParser on Win32 with MinGW and MSYS

My first and thrilling compilation, in the unhealthy Win32 environment, of something I had been developing on Linux. First the screenshot, then the story:

LogicParser on Windows

The build environment was set up using MinGW/MSYS/msysDTK and the GTK+/Win32 Development Environment. The first compilation failed, but I set the PKG_CONFIG_PATH environment variable to /c/temp/GTK/2.0/lib/pkgconfig and everything went very well, easier than I expected. Later I’ll look into compiling statically, to produce a morbidly obese but very convenient executable.

The “Graph (Image)” tab doesn’t work because I couldn’t install GraphViz on the CIn computer, as I don’t have administrator privileges. At least this helped me decide to use GraphViz as a library instead of calling it externally with GLib’s g_spawn_sync. Besides, for this thing to become truly cross-platform I have to use some GLib utility functions, like g_get_tmp_dir and such.

Useful references in this endeavor: