IFF catalog add-on for Haiku Locale Kit

Posted by pulkomandy on Sun Oct 15 15:58:38 2023  •  Comments (0)  • 

I made this as part of my work on porting ACE to Haiku. ACE originates from MorphOS, and so it uses "catalog" files for internationalization in the format used there. The format is based on the IFF container typical of Amiga-originated software.

Since the Haiku version of ACE uses pretty much exactly the same strings as the MorphOS version, it would be silly to have to re-translate all the strings. So I made this add-on to Haiku locale kit to handle catalogs in the MorphOS format.

At the moment this is not useful to any software other than ACE. However, maybe someone will find it useful. The format is a bit different from what is done in Haiku. In the Haiku locale kit, the original (usually English) string is used as the key for translations. This requires hashing the whole string, which makes everything a bit slower. It isn't very noticeable on modern machines, but it is definitely part of what makes Haiku slower than BeOS. The format used on MorphOS, on the other hand, assigns an integer to each string, which can be used to refer to it efficiently. It still provides fallback strings inside the executable, so if there is no catalog, things will still work.
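
To give an idea of the difference, here is roughly what the two lookup styles look like. This is a simplified sketch: BCatalog is the real Haiku class and GetCatalogStr the real MorphOS locale call, but the snippet is illustrative (MSG_SAVE stands in for the kind of identifier the locale_strings.h generator produces), not code taken from the add-on:

// Haiku locale kit: the English string itself is the lookup key,
// so every lookup has to hash the whole string.
const char* s1 = catalog.GetString("Save", "MainWindow");

// MorphOS-style catalog: strings are identified by a small integer,
// with a fallback compiled into the executable for when no catalog
// is installed.
const char* s2 = GetCatalogStr(catalog, MSG_SAVE, "Save");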

Maybe we could use this format more on Haiku and get better performance. But it would require a lot of changes to the tooling built around the existing format. Maybe a project for later...

You can get the iffcatalog sourcecode:

If you want to simply use it, the binaries are available in Haiku package manager.

Version history:

  • 0.1 - 04/2015 - First public release
  • 0.2 - 11/2017 - Skip '\0' in translated strings (used for keyboard shortcuts on MorphOS)
  • 0.3 - 06/2020 - Fix catalog search directory
  • 0.4 - 02/2022 - Add a python script to generate locale_strings.h from catalog definitions
  • 0.5 - 10/2023 - Remove dependency on libtextencoding, which is not safely usable in a Haiku add-on unless the host app links with it as well. Fix a bug in the locale_strings.h generator with handling of escape sequences.

Personal notes on the smallweb

Posted by pulkomandy on Sun Oct 15 14:10:04 2023  •  Comments (0)  • 

Lately I have taken part in some fediverse discussions about the "small web". As you probably know, many websites these days are bloated. They will load several megabytes of resources. I remember surfing the web as a kid: I would download some games and shareware, and I would be annoyed when one of them was larger than 1MB, as that would take a while to download. Now, a lot of webpages (just the webpage, not the downloaded software!) are already 10 times larger than that.

Yet, they do not seem to offer much more than they did 25 years ago. A web page mainly shows text, with some pictures where relevant, or just to make it look nice. So, what happened? Well, I don't really know. If anything, my website is getting simpler and smaller over the years. Early versions had MIDI music and a Java applet emulating the Star Wars opening scroll on the landing page.

Anyway, so, the "small web". It seems this idea started gaining traction in the last few years. It comes from multiple directions: retrocomputing is trendy (well, it always was), and there are now decidedly retro machines that had their commercial life during the early days of the World Wide Web. So there is demand from retrocomputer users who want to browse the web like in 1996 or so. Another nostalgic aspect is the "what if?" exploration of Gopher, an early competitor to the web that largely lost to it. And there are also the concerns caused, I guess indirectly, by climate change, about building a more sustainable tech that's less power hungry. Surely, simpler websites and web browsers could be part of that, as well as being genuinely useful to people who have limited internet access for various reasons (living in very remote areas, not willing or able to pay for a super fast connection, or being in a place where the infrastructure just isn't there for many possible reasons).

One thing that is somewhat successful in that domain is the Gemini protocol. It is inspired by Gopher but made more modern and a little less quirky; for example, it requires SSL where Gopher was mainly plaintext. Unlike HTML, it gives people writing pages very limited control over text formatting. As a result, it is quite easy to write a Gemini browser, even one working in a command line, and indeed there are quite a few of them. I had only a brief look at it, and decided it doesn't work for me for various reasons (this is my personal reaction to it, not a complaint; I'm sure other people are happy with these choices and that's good for them). The SSL requirement makes it unreachable for the 8-bit retrocomputers I care about. The limited formatting is often not enough for what I want to do with the web, and likewise, I find the lack of cookies, URL parameters and the like a bit of an extreme measure. I understand the idea of enforcing privacy, but I am not convinced it is worth the limitations that result.

But, my main problem with it is: it is new technology. There is a new protocol, a new page formatting language, new specifications, and new software to write. I get it, a lot of people will find these new things exciting, it's fun to write software and specs from scratch, without having to care about legacy quirks and past mistakes. This is not how I usually think about things. As I learnt in school, a good software developer is a lazy software developer. So, my approach is often trying to make most use of what's already available. In this case, this means the existing web browsers. That's a good idea because we already have a lot of software, a lot of people who know how to write webpages, and a good understanding of the problems and limitations. And there are also users (people browsing the web) who won't need to be convinced to install yet another piece of software on their computers. Can we achieve that? I don't know, but I find that idea a lot more interesting to explore than the "what if we erased everything we have and started over?" approach of Gemini.

So, I took a look at the existing specifications for the web. My goal here is to see how to make websites that are "light" (a webpage should be a dozen to a hundred kilobytes or so), fast to load, and yet allow websites to have some visual personality. The latter can't really be done in Gemini and I think it's an important part of the web: allowing everyone to have their own space, and really make it reflect who they are. There will be the wall of text just using the default formatting. The cute page with colorful text and animated GIFs. The red-text-on-black-background crazy conspiracy theory website. And so on.

From the technical side, let's try to limit ourselves to the technologies already widely supported by web browsers. Or, to word it differently: I don't think the bloat of the web is a technical problem to be solved by throwing more tech at it. Instead, it's just a matter of making better and more efficient use of what we already have. The core set of technologies is in fact quite reasonable. On the protocol side, we have a choice of at least HTTP 1.0 and 1.1, possibly HTTP 2, possibly some other more experimental protocols. I'm not sure HTTP 2 complexity is really useful for such small websites, 1.1 will probably be good enough.

More interesting is the content format. Here, we can review the different available options. The oldest one is HTML 3, which does not have CSS and mixes up content and presentation. This is not great by today's standards and does not really make things simpler. The newest one is HTML 5, which is in a kind of "rolling release" mode and keeps changing all the time. Personally I find this to be a problem, I would prefer a fixed set of features.

So, that leaves us with HTML 4. Which turns out to be really interesting, because it is from a time when mobile phones were starting to go online, and so, there was a simultaneous demand for richer webpages on desktop machines, and really simple ones for then limited mobile phones and other similar devices. Also, it is from a time when the web attempted to move to XML with XHTML (XHTML 1.0 being the XML reformulation of HTML 4). This seems to still be controversial: XHTML is a much stricter syntax, while HTML 4 retains the more flexible way of working of HTML 3 and earlier versions. Basically, whatever you write in a text document, a web browser can parse it as HTML. It is very tolerant of unclosed tags, weird nesting of things (putting a paragraph inside a table inside a list item), and so on. XHTML, on the other hand, is allowed to reject a page as invalid XML. Well, it doesn't really matter anyway, existing browsers already accept both, so we can use both depending on the context. If you are writing a static website generator, you can probably make it generate valid XHTML. If you are manually editing webpages or allowing your users to do so, maybe HTML will provide a less frustrating experience? I'm not sure about that, in my early days of writing HTML and Javascript I think I would have preferred to have clear error messages instead of a browser that always did something, but rarely exactly what I wanted.

Anyway, with both XHTML and HTML 4, one interesting aspect that the W3C worked on is modularity. You can pick a profile like XHTML Basic, which gives a set of recommendations for which tags should be used or not. The selected set seems reasonable for my use of the web, it does not seem very constraining to me. Likewise for CSS, you can easily decide to limit yourself to "level 1" or "level 2" features. Or at least, you can make sure your website "degrades" to an acceptable rendering on browsers that support only these, while making use of more advanced features on browsers that can handle it.

Finally, we have to talk about Javascript. Javascript is not a great language. We know why that is: it was designed in a very short time, with requirements along the lines of "Java is trendy, but object oriented programming is too complicated, can you make us a version of Java without objects? We need it in two weeks". As you would expect, the result is not great, and the language did get objects later on anyway. Well, unfortunately, we don't really have a choice. So, my approach is to limit Javascript usage to the minimum possible, and make sure my websites work well on browsers that don't run Javascript. But I still find it useful, for example for client-side pre-validation of forms. It is useful to have immediate feedback on required fields in a form, and not discover problems only after you have submitted it. That kind of thing.

Also, Javascript allows setting up things like "AJAX" (OK, no one calls it that anymore): basically, you can have dynamic webpages where just one part of the page is reloaded or generated by Javascript code from a loaded network resource. Sure, this makes the browser quite a bit more complex, and the webpage will need some processing. But it doesn't necessarily have to turn into megabytes of bloat. In fact, if used well, this can be very efficient in terms of bandwidth, since the whole webpage does not need to be reloaded. So, I don't like the idea of banning Javascript. Just make its use minimal, and, where it makes sense, have a fallback for browsers without Javascript.

Finally, one thing that is a bit unused in the current web is RSS feeds. Or, more broadly, the idea of exposing website data in a format that's easy to parse and process by applications. I think this is a path that deserves to be explored more. Not only RSS, but also more generally exposing any type of data as well-structured XML, and maybe using XSLT to process it into other formats. This is well supported in most web browsers, and certainly could get more use. I like the idea of websites exposing data as XML that can also be easily downloaded and processed elsewhere (by some other website, or by a "native" app on the user's computer). This is kind of a tangent to the small web, but maybe there's an opportunity to build something new and exciting here.

So, what will I do, then? Well, not much. This website should be XHTML 1.0 Basic compatible and uses no Javascript, because it doesn't need any more than that. It does use some modern CSS features, but even with CSS disabled, you can read the text in the articles and use the website in an acceptable way. I hope this inspires other people to do the same. Maybe if more and more websites are built like this, the bloat of the big modern web3 ones will stand out more, and people will notice and complain about it?

Down the rabbit hole: I just wanted to write a videogame

Posted by pulkomandy on Sat Jul 1 17:02:59 2023  •  Comments (0)  • 

It is one of these weeks where nothing goes as planned.

Rabbit hole level 0: I wanted to write videogames

For some time now I've been part of a team trying to better document the VTech V.Smile console and make it easier to write games for it. They contacted me because I had some experience (and blog articles) about other VTech hardware.

The current efforts are documented in my VTech wiki. Most of the work was already done several years ago: datasheet and schematics were found, hardware was documented, games were dumped, emulators were written. But there was no documentation and no opensource tools to build new games, or at least, nothing quite production-ready. The only option would be the compiler suite provided by the CPU manufacturer (this is a custom CPU core, used in a few other game consoles).

So, after writing the documentation in the wiki, I started to experiment with writing an assembler and compiler. I initially started looking into vasm and vbcc, because my experience with these in the past had been rather good. The developers are helpful and the code is understandable by me and designed to make adding more CPU architectures easy.

Rabbit hole level 1: I need a C compiler

I quickly ran into problems with vasm, however. The CPU in the V.Smile is a purely 16-bit thing, which means it can't address individual bytes. While vasm has some support for this in the code, it was never used, and in fact, does not work. I discussed this with the vasm developers, and the solution they suggested is that all addresses in the assembler code should be prefixed with some special character, and the assembler frontend can multiply or divide them by two as needed.

I looked into that, but decided it would make writing assembler code more complicated and annoying than needed, as there is a risk of forgetting the marker and suddenly having your address all wrong.

On the vbcc side, I did not have many problems; the porting guide is very complete and there is not too much work needed to get a basic version of the compiler running. But, without an assembler, it is not very useful. I did some experiments with Michael Kohn's naken_asm, which has support for the UNSP CPU used in the V.Smile, but it is a simple assembler that can only directly generate a final binary. It has no support for temporary .o files and a linker. So in my tests I had to let the compiler generate assembler files and not assemble them, and also generate them in a way that they could be concatenated together at "link" stage before being assembled into a binary.

I got this to work for simple cases, but it is not great to work this way.

Rabbit hole level 2: I need an assembler and linker

I let the project sit for a while (I think it's been about a year?) hoping that someone else would do it(tm) or I would find a more suitable assembler somehow. I looked into ASXXXX, but this looks somewhat limited and not super easy to port.

So, eventually, I decided that if I'm going to port something not super easy, I may as well go for the Real Thing, and port GNU binutils. My research showed me that there is a porting guide, even if it's fairly short. And I think I have spent enough time doing low-level stuff (compilers, assemblers, writing linker scripts, baremetal programming on AVR and ARM) that this should be within my reach. And so I cloned the git repository and started following the guide.

After just a few hours, I had something compiling and generating various executables: assembler, ar, objdump, etc. for my architecture. I don't expect any of these to actually work; I started by just filling in empty functions and adjusting the build system to get everything to compile. The idea is then to run each of them, find what doesn't work, and add the missing functions as I go.

Binutils comes with a test suite, so I thought I would start by running that, look at all the failing tests, and fix them by adding bits of the code for my port, looking at how it's done for other CPUs.

Rabbit hole level 3: running the binutils test suite

This doesn't look too complicated: install the needed software, run "make check", investigate and fix bugs, and repeat.

So I went ahead and installed DejaGNU and expect, which form the base of the testing framework. I then ran "make check" and… the testsuite immediately failed.

I had not heard of DejaGNU before, it seems to be a set of extensions to expect used to run tests on cross-development environments, typically, compile software on one computer, run it on another, and check that the results are as expected. I am not sure if anyone else uses it outside of binutils and gdb.

In any case, it is written in expect, which itself is written in TCL. And in the binutils case, it is also intertwined with the binutils build system which is written using autotools (and a specific version of it).

Rabbit hole level 4: learning to use expect

So my next step was trying to run a simple "expect" program. I quickly found that expect was completely broken, and that it was a known problem, with a bug report open at HaikuPorts since 2021. I had not mentioned yet that I am doing all this on the Haiku operating system; I would not run into these problems if I had chosen a more stable and finished operating system. But where would be the fun in that?

Anyway, so expect doesn't know how to open a PTY to communicate with another process (which is the main thing it is designed to do: spawn a process, read its output, match that with some regular expressions, and reply with some input according to a script).

A quick look at the code and buildsystem helped me find the problem: expect can handle many ways to open PTYs, and on Haiku, the preferred one was not picked because it requires linking an extra library that the expect configure script could not figure out. I quickly fixed that and… immediately hit another bug.

Rabbit hole level 5: coreutils

Now expect would correctly open a PTY, but it would fail to configure it. I once again dug into the sourcecode and found that it does this by running "stty sane" using the system() call. So I ran that same command in my shell, and indeed was greeted with the exact same error message.

Quick sidenote: I found the use of "stty sane" using strace and looking for calls to the exec system call. This almost didn't work: support for printing the command line of the executed command for exec in strace was added in Haiku by another developer just 3 weeks ago. So that's one rabbit hole jumped over, yay!

stty is a standard command provided by GNU coreutils (in Haiku at least, other operating systems may have their own version or one written by someone else under a different license or using a different programming language).

The expectation is that coreutils will detect and check a lot of things about the OS in their configure script while building, and compile the tools in a way that works for each system. But they didn't handle the case where termios.h defines speed_t as an unsigned char type. They set speed_t variables to -1 and later compare them to -1, and due to the integer promotion rules in C, the comparison is false. If someone tries to tell you Javascript makes no sense and you want them to go away, tell them about C integer promotion rules.
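
Here is a minimal standalone reproduction of that promotion trap (with speed_t typedef'd locally to mirror what Haiku's termios.h does):

#include <stdio.h>

typedef unsigned char speed_t; // mirrors Haiku's termios.h definition

int main(void)
{
	speed_t speed = -1; // actually stores 255
	// In the comparison, 'speed' is promoted to int (value 255),
	// which is never equal to -1:
	if (speed == -1)
		printf("equal\n");
	else
		printf("not equal: speed was promoted to %d\n", speed);
	// The fix: force the -1 through the same type first.
	if (speed == (speed_t)-1)
		printf("equal with the cast\n");
	return 0;
}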

Anyway, I added the missing type cast, and stty started working. I thought I was finally ready to go one level up the rabbit hole towards the surface. I was wrong.

Rabbit hole level 4-and-a-half: expect again

I installed my newly built coreutils on my system and ran expect again. This time, not only did expect start, but I managed to read the output from the launched program.

I then returned to the binutils test suite and ran 'make check' again. This time, it ran 2 tests, and on the 3rd one it stopped, waiting for something. I was a bit annoyed, not only because I had already fixed more bugs than I wanted to, but also because I was not too sure which part of the stack was wrong this time.

Eventually I found how to enable expect debug mode, and found which command it was running. I confirmed that the same command, ran standalone, returned immediately and with the correct results. So that wasn't a problem and I turned my attention to the test framework.

I studied the DejaGNU script for the failing test, and, while it took some time to peel all the layers, eventually I found that it was something quite simple: run 'ar' with some arguments, wait for the command to complete, and then check the output file. The failing part was 'wait for the command to complete'.

After some more experimentation with expect, I wrote a two line script that reproduces the issue. I ran it on Linux and confirmed that it has no problem there. Since that script is short, here is a copy of it:

spawn echo
expect eof

So basically, we start the 'echo' command and wait for it to terminate. And expect doesn't notice that it terminated. 10 seconds later, there is a timeout (which doesn't happen in the binutils tests because they set the timeout to 300 seconds instead of 10).

I turned to strace again, but I could not see a lot more. I also tried to follow the code in expect and in the tcl interpreter, but I quickly got lost. So I opened a support request on the expect bugtracker describing my problem, and went to sleep.

The next day, I had some answers from the expect developers, mainly suggesting things that I had already tried but not included in my short ticket, so I shared the info (strace output) with them. My fresher brain after a night of sleep also helped me look at things in more detail. I knew that expect uses a PTY to communicate with the spawned process, and so I decided to write a simple test program to do something similar with fewer "moving parts" involved: spawn a child process attached to a PTY, let it exit, and verify that the parent process waiting on the other side of the PTY is notified that the child is done.

Rabbit hole level 6: PTYs and poll

So I picked an example of PTY usage and started modifying it to my needs. And, I could easily reproduce the problem. Once again I made sure to run the program on Linux and Haiku to compare outputs. On Linux, when the child process exits, the PTY is closed and the poll in the parent process is notified. On Haiku, this does not seem to be the case, and so this program remains locked waiting forever. However, removing the poll call, a read call does not block, and properly returns an end of file. So it is just a problem of notifying the process waiting on poll that the file descriptor it is waiting on is now closed.
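
The test program looked roughly like this (a simplified sketch along the same lines, error handling omitted):

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	int master = posix_openpt(O_RDWR | O_NOCTTY);
	grantpt(master);
	unlockpt(master);

	if (fork() == 0) {
		// Child: attach to the slave side, say something, and exit,
		// which closes the slave end of the PTY.
		int slave = open(ptsname(master), O_RDWR);
		write(slave, "hello\n", 6);
		_exit(0);
	}

	// Parent: wait for activity on the master side.
	struct pollfd pfd = { master, POLLIN, 0 };
	// On Linux, poll() returns once the child is gone; on Haiku, at the
	// time of writing, it stayed blocked until the timeout, even though
	// a plain read() would have returned end-of-file immediately.
	int result = poll(&pfd, 1, 10000);
	printf("poll returned %d, revents %x\n", result, pfd.revents);
	return 0;
}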

Now the next step is to fix that bug in Haiku. And, even if I do that, I don't know if it will also fix the problem in expect, as I was not able to find where in Tcl the waiting for file descriptors is handled.

So, as of now, I don't know if this rabbit hole has more rooms for me to explore, or if I will find my way up at least one level. Maybe I will lose interest in this and do other things for a few months before I get back to it. And probably I will uncover many more rabbit holes.

Conclusion

For people who think Haiku should not be in "beta" releases, I hope this helps you understand what we mean when we say Haiku is not finished. It is not a safe ground to build any software on. Sure, a lot of commercial systems don't do any better, or didn't in the past, but still, the other options currently available aren't that bad nowadays. And not everyone is willing to get deep into these things like I do.

For people who wanted to use my C compiler to port games to the V.Smile: well, if you don't run Haiku, you can stay comfortably at level 1 or 2 of this rabbit hole and still be of help. If someone else were porting this assembler and compiler, I wouldn't need to run the binutils testsuite and all the deeper levels could be skipped. For now, at least.

For myself: sometimes it feels like I'm making no progress, but that's not true. It's just a lot of work in directions I didn't initially plan to go in. And such things are probably helpful for future projects as well. Also: I am surprised there were not more complaints about expect not working, and about PTYs being broken on Haiku. I thought these would be used a bit more often in typical UNIX toolchains?

BGhostview postscript viewer for Haiku

Posted by pulkomandy on Sat Apr 29 13:27:30 2023  •  Comments (0)  • 

Today I released version 1.0 of BGhostView, a postscript document viewer for Haiku.

Screenshot of BGhostView, showing a USB document from the OpenBoot specifications

This software started in the late 90s as a port of a postscript viewer from UNIX/Linux to BeOS. Back then, Ghostscript did not have a cross platform API, and the BeOS port had to work with a patched up version of the Windows GSDLL API, heavily modified to run on BeOS.

I started working on it because in my past attempt to port Haiku to SPARC machines, I found that a lot of the documentation for these was distributed as PS files instead of the now more typical PDF. At the time, no version of Ghostscript was available for Haiku. So I started digging and found an old port of Ghostscript which provided a starting point, and this viewer to use it with. But it wasn't working very well.

I then found that Ghostscript now has a slightly better API, and I could make use of that instead. So now BGhostView is running with an up-to-date version of Ghostscript (thanks to other people who also have PostScript interpretation needs on Haiku, this was not entirely taken care of by me).

I had not touched BGhostview since 2019, but I got reports that it was crashing recently. So, this week I dug into the code again and made some fixes and updates and decided to make a 1.0.0 version for all to enjoy. It is certainly not yet perfect, but for the basic needs of viewing Postscript documents, it should be fine.

This is yet another one of these applications that is currently hosted by HaikuArchives on Github, meaning it is more or less open for many people to contribute to, but left without someone to really take care of it and move it forward. Well, I guess that can be me when there are bugs to fix, but I probably won't have time to manage the larger refactorings and cleanups that would be needed: converting the UI to Haiku layout system so that it can automatically scale for High DPI displays, reindenting and reformatting all the sourcecode (it's very inconsistent at the moment, I guess most of it was written without a proper code editor that would watch the indentation a bit?), and reviewing the Ghostscript integration code to make better use of the APIs available in modern versions. Postscript isn't exactly great for that kind of usage, just figuring out how many pages there are in a document turns out to be a somewhat tricky problem.

There's also the question of whether we want a separate viewer for each document format. Wouldn't it be nice if the same viewer could do both PDF and PostScript? And what about DVI and XPS and FOP? And maybe docx and opendocument while we're at it? Could we use translators for this, so we can write the GUI once and then have all formats added to it later on? That would certainly be a cool project, but I already have many other things on my TODO list, so, not for now...

Get the sourcecode here

If you just want to run BGhostView on Haiku you can simply install it from HaikuDepot as you'd usually do.

Developer console

Posted by pulkomandy on Mon Apr 10 15:45:46 2023  •  Comments (0)  • 

I wrote new software today! Well, sort of.

This does not happen very often. A lot of my software work is fixing and improving existing code, and not writing new things. Maybe because I'm a bit lazy and I find it easier to fix some bugs in existing code than having to start from a blank page. It provides easier reward for me (that may not be the case for everyone, digging into an existing codebase is a learnt skill).

Anyway, so, the story is, somewhat recently (ok, actually, it's already more than a year ago), I got a new laptop. This was an opportunity to do an install of Haiku from scratch, and while doing so, I decided to go with the 64bit version. The only limitation in the 64bit version of Haiku is that it can't run 32bit software, including several applications that were compiled two decades ago for BeOS, and for which the sourcecode isn't publicly available.

I didn't think I needed any of these applications, as, over the years, a lot of them have either been open sourced (mainly thanks to the efforts of the Haiku Archives project in collecting such software and reaching out to the original authors to get the sourcecode published and/or relicensed under opensource licenses), or have been replaced by newer software or rewritten.

It turns out, one piece of software I occasionally use had not been through this yet. And so I got to work on rewriting it.

The software in question is BeDC. No, not the Direct Connect client of that name, but the rewrite of the earlier "Developer Console" (I think? it is unclear how BeDC and Developer Console are related). The idea is quite simple: this application receives text messages from other applications and logs them in a window. I discovered this software while working on PadBlocker, an input filter add-on that disables the touchpad while the keyboard is being typed on. Because of the way input server add-ons work, it is not easy to grab their output in the common way (sending it to stdout), because they run inside of input_server, which normally does not have its stdout routed anywhere.

I thought the idea was interesting and started using the app in some of my other projects, mainly in WebKit, where it was useful to collect logs from the different processes started by a single web browser instance and clearly marking where each message comes from, and in Renga, an XMPP client, as a way to log incoming and outgoing XML messages for debugging.

So, of course, on my shiny new 64bit system I did not have access to this nice tool, until now. I have rewritten my own version of it. That was made quite simple thanks to the APIs available in Haiku: the whole thing fit in about 200 lines of code. It is fully compatible with existing apps that used to target the old BeDC app, but it looks a bit nicer, thanks to the modern UI classes in Haiku. For example, when logging a BMessage, it can be shown as a nice foldable structure instead of just a bunch of lines.
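
For applications that want to log to it, the sending side only takes a few lines of Haiku API code. Note that this is a hypothetical sketch: the application signature, message code and field names below are placeholders I made up for illustration, not the actual protocol of the old BeDC app.

#include <Message.h>
#include <Messenger.h>

// Send one line of text to the console, if it is running.
void LogToConsole(const char* appName, const char* text)
{
	BMessenger console("application/x-vnd.BeDC"); // placeholder signature
	if (!console.IsValid())
		return; // console not running: drop the message silently

	BMessage message('LogM'); // placeholder message code
	message.AddString("app_name", appName); // placeholder field names
	message.AddString("text", text);
	console.SendMessage(&message);
}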

I have some future plans for this, mainly to make it more useful with Renga where having some formatting of the XML would be great. But I don't know when I'll get to that. Until then, you can find the sourcecode on my Gerrit.

Why hackathons are shit

Posted by pulkomandy on Wed May 25 19:02:11 2022  •  Comments (0)  • 

Well, we're out of lockdown, and people are starting to organize in-person events again. So, logically, hackathons organized by all kinds of companies are back. I couldn't find (after searching for 3 seconds) a page summarizing why it's not a good idea to run or take part in a hackathon. So I'll write one.

For those who haven't followed the topic, the principle of a hackathon is to form teams that will develop a project in a fairly short time, for example 48 hours (though we now see week-long hackathons). In these 48 hours, you have to come up with an idea and start writing a proof of concept, a program that quickly demonstrates the principle. At the end, each team presents its work, and a jury picks a winning team. Given the very short deadline, participants are expected to work on their project flat out, day and night, without stopping.

Of course, the participants are not paid, except perhaps the winning team, which will collect at best a few hundred euros (divided by the number of team members, that does not make a reasonable wage for night work).

Most of the time, the code developed will end up in the trash. And that is probably the best outcome for it. When you decide to build a project in such a short time, there is no time to think about a clean architecture. You will mostly create technical debt, and not much that is usable. If someone thinks the initial idea is good, they will probably be better off rewriting the code from scratch.

Speaking of ideas: normally, when you have an idea and plan to do something with it, you either keep it secret until the product is ready, or, for something a bit more advanced, you may file a patent (which requires that the idea was not published beforehand, otherwise it is no longer possible). A hackathon is a very good way for a company to make you publish your ideas for free so it can reuse them later. You will thus be working for them, for free.

Beyond the competitive aspect, hackathons embody the worst possible company culture and way of working: both code quality and the developers' health are thrown away, just to meet an arbitrary deadline. This is the exact opposite of what should be done. Over the long or even medium term (more than two days), a project mostly needs well-architected code with manageable technical debt, and a healthy team that can last several years. I will let you ponder who might benefit from getting developers, or future developers, used to anything else. In a hackathon, the competitive aspect is there to put pressure on the participants and push them to their limits.

On top of that, this way of working, where people are expected to be available for 48 hours or sometimes even a whole week straight, is not inclusive at all. For example, you will not find among the participants (a list that is of course far from exhaustive):

  • People who have a family they need to take care of,
  • People who have a job or studies with fixed hours,
  • People with any of a whole list of disabilities that require rest or impose other constraints,
  • Quite simply, people who prefer a more reasonable lifestyle and do not want to pull one or more all-nighters.

Some random thoughts about XMPP spaces

Posted by pulkomandy on Tue Aug 17 17:58:46 2021  •  Comments (0)  • 

You may or may not know that I have been involved in XMPP in one way or another for some time. Recently I started working on Renga, an XMPP client for Haiku, and took part in an online meeting and discussion about why Discord is so successful and what ideas XMPP could borrow from it. Part of the discussion revolved around the way Discord organizes multiple channels in a "server" and how well that fits their user base.

Today someone contacted me and shared a work-in-progress document about XMPP "spaces", which is an attempt to see how something similar could be done in XMPP. I was surprised to see the document dive straight into discussion about protocols and the like, with the UI/UX part being "TODO write this later". I am not sure this is the right way to design the thing. I was asked for my input on it, so here it is. I do not have a lot of experience with the XMPP protocol, but as a user, I chat on various systems with various people, and there are several cases where I can see a use for such things.

So, let's tackle this from the user experience point of view. I will completely ignore the "but how do we do that?" part and focus on what I think could work well. Let's start by going back to basics and define some use cases. Why do we want to do "Spaces" in the first place?

Use cases

Let's imagine that XMPP is a very popular protocol and everyone uses it. No other chat system exists anymore. Let's see what it did to become so successful. I will take my own usage of various IM networks to see how this could look in that alternate (or future?) world.

Communicating with my family

My parents are doing ok with phones and computers, but still, let's keep things simple for them. A single chat channel and ability to send 1:1 messages will be enough. A media library archiving all the pictures and links we sent to each other could be nice. There will be almost no movement of users joining/leaving this space (maybe a new girlfriend/boyfriend joining the family or leaving after a breakup once every few years?), and everyone can be a moderator or it could be just one person.

I think that was the easy case, which is already mostly covered by existing options.

In the office

I am working remotely this year and in our company this meant reviewing and improving our chat system, which we now use a lot.

I work in a company that has in total about 800 employees, of which 150 in the local branch I work at. We are software engineers and developers. We work on many projects for different customers. Each team is typically 3 to 30 people (with sub-teams in the largest teams). We also have some people who need to do things for many teams at once (for example our sysadmins are taking care of services and tools deployed globally or specifically for a given team).

In our current chat system, we have one single space for the 150 persons in the local branch. Each project has one or more private channels, which are not listed anywhere in the UI. When people join a project, their account is invited to all the corresponding channels. This works quite nicely for us and there doesn't seem to be a reason to group channels together here.

What we would like to have, however, is a way to create temporary sub-channels for discussing specific issues. Something that would be similar to an e-mail thread. Slack and Zulip are examples of chat systems which allow this. Zulip is very close to the email way, having separate threads at the core of its UI. In Slack, it is done by picking a specific message in a channel and replying to it, which creates a sub-discussion. This would be great to organize our chats and more easily keep the info we need.

Other nice to have features are a way to search for old messages in a specific set of channels (but probably this doesn't need to have them formally tied together as a "space"), and a way to pin things (mainly URLs or file attachments) to be able to find them easily. I can imagine also more advanced features such as a shared calendar to place our meetings and days off work in.

Probably, larger companies will want a more segregated system, and I can imagine companies which have non-computer-inclined people (not software engineers) may need some more centralized admin roles to oversee who has access to what. So that probably means accounts tied to some LDAP server, not being able to list MUCs unless your account is added to the appropriate space, and not allowing people to leave a space on their own because they would be unable to re-join it.

Opensource projects

I contribute to a lot of projects, some large, some small.

I will not go on for very long about the small projects because the existing solution (just a single MUC) is just fine for me. So let's see about the larger ones which have a need to split their discussion into multiple aspects.

So, in this case, the main thing to think about is onboarding. If you don't care about onboarding, you will be just fine with a dozen independent channels which have no apparent relation to each other, except they are listed together on your website, and maybe they all have your project logo or some variation of it on them. If you care about onboarding, you want to make it easy for a newcomer to click on a single button/link and immediately join your Space and discover all the channels from inside there, in their favourite IM client.

You will probably want some kind of "backstage" channel for moderators to discuss ongoing issues. This should not be visible to regular users, of course. Which means multiple channels in a space can have different access rights. On the other hand, you may want to nominate moderators and automatically allow them to be moderators on all the channels. Speaking of moderation, you also want the ability to kick/ban someone from the whole space if they misbehave.

As an opensource project, you want to be transparent and have an archive of everything that was said and shared, possibly over the course of decades. This includes channels that are currently unused because the project was reorganized. Possibly you'll want to split the history of a space because one project was split into two separate parts. You may want to copy it to create a fork of the project while retaining the past history in both branches of the fork. And you may also want to merge the history from two projects together and form a single space, but probably I'm going a little crazy here.

You also want to preserve the privacy of your users. It should not be easily possible to identify that user A in space X is in fact the same person as user B in space Y, if they decided to use different nicknames in each place. On the other hand, you want to be really sure that if you talk to someone named "user A", it is really them, and not some other person using the same nickname.

Another aspect to think about here is notifications. For high traffic channels and projects, I probably won't read everything. I will have the chat client on my computer and read it when I have time or maybe if someone pings me. But I don't want this to ring my phone every time something happens. It should be a distraction-free thing that I can have running in the background. This means I need easy configuration for which notifications I want on each of my clients. I think both for the whole space, and for specific channels (there may be some channels I have no interest at all in following, maybe because they are in languages I don't understand, maybe because the project is large and some topics are not interesting to me).

Chat with friends

One of my uses of IM currently is organizing board games sessions with friends (but whatever your hobby is, probably some of the same applies). Here, there isn't really a notion of a fixed "space". Some of my friends don't know each other or have met once during a board game afternoon and then never met again. Currently I have one rather large channel with a lot of people but I think I will just create and delete smaller groups as needed. In my case, a "space" is probably not useful here.

Gamer communities

I am a lot less familiar with this. I think a large part of the "opensource" section will also apply. Probably channels with restricted permissions (only a few people can talk) are needed. Also, some nice to have things: custom "stickers"/emojis specific to the server, ability to define and rename roles and assign them specific permissions, ... Just read the Discord documentation.

Chat with strangers

One place where IRC is still somewhat popular. There are chat services with various thematic channels (by age, location, or shared interests) all thrown together into a "space". People can join and talk with complete strangers. There are a lot of trolls and people with inappropriate behavior. Users of the service need an easy way to report such things so a moderator can quickly intervene. If the space is big enough, there will be separate moderation staff for each channel, but probably still a common backstage channel for coordination.

Thinking about the user interface

So, what do we need to put in our user interface? Here is an attempt to summarize it:

  • Single-button way to join a space
  • Ability to see a list of channels in a space you joined, with a description of what their purpose is
  • Media library with all pictures/links/? or pinned messages
  • Ability to see long term logs (multiple years) of all channels, including now inactive ones
  • Possibility for space moderators to archive a channel (only past logs available, no way to post new messages)
  • Manage permissions for a single channel and for the whole space (who can talk, who is a moderator, etc)
  • Ability to configure notifications, per-client, globally on a space and more specifically on each channel
  • Know who is joined in a space, ability to reliably ban people (in a way they can't avoid just by rejoining with a different nickname)
  • No way to identify that two users in two different spaces are in fact the same person
  • Multiple levels of administration: the owner of a space can nominate moderators for different channels, control which channels are visible to all users or to users with some specific role only, etc. Moderators can adjust some, but not all, settings of the channel they are moderating
  • Ability to join a space but only join some of the channels inside and not all

In terms of user interface, channels from the same space should of course be grouped together. There will probably be a LOT of channels so you probably won't get away with a single tree view, it will never fit everything on screen. Which means you need a first level with a list of all the spaces, showing which ones have ongoing activity. Then you can select one of the spaces and see the channels inside.

In the XMPP world, one thing to think about is how to handle things that are not in a space. Maybe they can just be put into a "default" space from the UI point of view?

If you know someone's real JID, and you start a chat with them from inside a MUC, it would be super annoying if that ended up being a separate chat history than if you contact them directly. Or maybe it's a feature to have separate discussions (let's say if you have a colleague and you talk work things, but they're also a friend and at other times you talk non-work things).

You will have some kind of management menu (maybe right click on the space icon/name) to decide if you want to leave a space, configure notifications, see who is a moderator or admin.

Quick notes on building gcc

Posted by pulkomandy on Sat Apr 1 13:44:05 2017  •  Comments (0)  • 

This may not be up to date anymore. A complete GCC for AVR (and ARM) is now available as HaikuPorts recipes, which provide a more complete process, with a C library and everything. Refer to these recipes if you need to do it (even on other platforms, the recipes aren't all that hard to read and adjust).

As you may have noticed, I like to develop stuff for all kind of weird devices. For this, I usually need a C compiler, and most of the time it's gcc (not always).

gcc is a big piece of software and there are some tricks needed to build it. Also, I run the Haiku operating system, which is quite nonstandard, so additional workarounds are needed.

Today I built gcc for avr. Here are notes on how to do it so I don't spend a month figuring it out next time.

#!/bin/sh
# gcc compilation for gcc 4.4.5 (4.5.x needs more stuff. maybe later)
# Made by PulkoMandy in 2010
# Before you start :
# * Download gmp and mpfr from HaikuPorts (http://ports.haiku-files.org/wiki/Downloads) and extract to /boot
# * Download gcc-core-4.4.5.tar.bz2 from gcc mirror and extract to work directory
mkdir obj && cd obj # This is the output folder. So you can keep the source area clean
setgcc gcc4 # We can't compile gcc4 with gcc2.
../gcc-4.4.5/configure --target=avr --enable-languages=c --prefix=/boot/common/ --with-mpfr=/boot/common/
	# Tell the target, then language we want, and where to install the result. Binaries will be called avr-* so don't care about overwriting other ones.
	# For some obscure reason mpfr isn't detected properly, so we force the prefix.
make all-gcc ; make install-gcc # This does compile only gcc, not libgcc; which failed to work for me.

There are other things to watch out for: I had to remove a -lm somewhere, as Haiku doesn't have a separate libmath.

Next: build a PlayStation development toolchain, including a gcc MIPS target.

Projects I'm NOT coding

Posted by pulkomandy on Sat Apr 1 13:41:22 2017  •  Comments (0)  • 

Sometimes I have ideas about software that could be interesting to write or useful to use, but I'm already contributing to a lot of projects and I'd rather not start all of the new ones.

Following a talk on #haiku irc channel, I decided to put the list online so other people can pick these projects up and start working on them.

Please let me know if you made (part of) one of them. So I can link to you here :)

PulkoMandy's ever growing TODO-list

Older items first.

  • Port the znax flash game to the Atari Lynx console.
  • Create a device that plugs onto the Amiga clockport and can serve as a base for complicated projects. A DSP to decode OGG would be nice.
  • Build a mouse adapter, similar to AmiPS/2, but using the AMXmouse protocol for the CPC.
  • Compile the SDL version of Road Fighter for the GP2X
  • Port Rick Dangerous 2, Prince of Persia 1 and 2, The Lost Vikings 1 and 2, and Jazz Jackrabbit 2 to SDL or another lib and make them run on the GP2X.
  • Maplegochi: an electronic maple tree for the Haiku desktop. You feed it with some water and you watch it grow day after day. The tree is built with random fractals so everyone gets a unique tree on their desktop. It changes over the seasons (depending on the system date and the locale). It is a replicant living on the desktop and acts as a living background, without being too disturbing.
  • Make the various USB-to-serial chips work on Haiku with a simple terminal program.
  • Code a ROMDOS D1 filesystem add-on for Haiku to read/write floppies for Amstrad/Schneider CPC computers.
  • Network-shared whiteboard application for Haiku, allowing people to draw diagrams and see each other drawings. Likely use Playground as a base.

Some projects

Posted by pulkomandy on Fri Nov 8 12:56:42 2013  •  Comments (0)  • 

Some time ago I set up a trac install on my server to put all my running stuff. As I ended up using the provided wiki for tech documents, there is nothing visible on this website. This article lists all these projects so they are linked from somewhere; maybe this way Google will index them better.

Also, there's more on my github page, where I plan to migrate most of the stuff above, someday (because git is better, github is more visible, and Trac in fastCGI mode has a very annoying memory leak making it the first source of problems on my homeserver). There's also my Google Code Project Hosting page, which I plan on not using anymore, but there are still some projects to migrate.

Amelie, a filesystem for 8-bit computers

Posted by pulkomandy on Sat Jun 8 23:23:19 2013  •  Comments (0)  • 

This is a project I have been working on, on and off, for the past year. It all started with the MO5SD project by Daniel Coulom (French page). The idea of this is to use an SD card plugged into the tape port of the French Thomson MO5 computer, which happens to use TTL logic levels.

I reused the ideas from MO5SD to build a similar interface for the Amstrad CPC printer port. However, the Amstrad operating system has better capabilities than the MO5 one and is friendly to expansion ROMs. This makes it possible to run a full-blown filesystem rather than a simple bootloader like the MO5 version does.

Getting the SD card read/write code working was the easy part. After spending some time (with help from SyX and Cloudstrife) optimizing the z80 assembler code for SPI bit banging, I started looking for a suitable filesystem. The most common floppy disc ROMs for the CPC are AMSDOS (the one that shipped with the machine) and Parados, which improves support for dual-side, 80-track floppy drives from PC. None of them easily allows rerouting disk access to anything other than the floppy controller. I heard that RODOS, a less known disk ROM, has such a capability, and that it also has directories, permissions, and some other nice features. However, close inspection of the RODOS ROM showed that there is actually no way to redirect disk access to anything else than the FDC (if you know a way...)

Moreover, the RODOS filesystem design looked like it would be slow. My bit-bang access to the SD card doesn't go very fast, and jumping around between several sectors isn't a good thing. I wanted to keep files without too much fragmentation so I could load them fast. RODOS is also limited to 20MB volumes, which sounded huge in the 1980s but felt ridiculous for my 4GB SD card. Finally, RODOS requires some RAM, and any ROM that does that on the CPC reduces compatibility with some software.

So, I decided to design my own filesystems. The goals are simple:

  • Use as little RAM as possible. AMSDOS allocates a 2K buffer; that should be the only space we are allowed to use (and some stack, of course).
  • Limit access to the storage medium whenever possible. Try to not read the same sector multiple times.
  • Allow the use of big drives. This requires directories, and also some other tricks.
  • Be z80-friendly. No 32-bit math is ever done.

I worked on a C++ version first. This allowed me to keep the code readable while I experimented with various things. I did all the testing on PC and will start converting the whole thing to z80 code only when I'm fairly sure there won't be big changes to it again.

An SD card can hold up to 4GB of data. I decided to split this into 256 volumes that map to the CPC notion of "users". On an AMSDOS floppy, each file belongs to a single user and can't be seen by others. This limits each user to 16 megabytes of data. Since files are allocated on 512-byte sectors, these can all be addressed with a 16-bit offset. I also limited the number of files and directories so they fit in a 16-bit counter.
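
Spelled out, the arithmetic looks like this:

4 GB card / 256 user volumes       = 16 MB per volume
16 MB / 512 bytes per sector       = 32768 sectors per volume
32768 < 65536                      -> a sector number fits in 16 bits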

The filesystem uses a block map with sector granularity (the first 16 sectors are used for that), and a file/directory list based on extents. When the filesystem is not fragmented, a directory will use just 16 bytes of space (including the 11-character name and up to 256 entries), and a file will use a 16-byte header to store up to 128K of data. These entries can be extended, so directories have an unlimited number of entries, and files have unlimited size.
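
Just to give an idea of how compact that is, here is a purely hypothetical sketch of what a 16-byte entry could look like. The authoritative layout is in the filesystem readme; apart from the 11-character name and the 16-bit sector numbers, the fields here are my guesses:

#include <stdint.h>

// Illustrative guess at a 16-byte entry, not the documented layout.
struct Entry {
	char     name[11];      // 11-character name, as in AMSDOS
	uint8_t  flags;         // file or directory, "extended entry" marker
	uint16_t firstSector;   // 16-bit sector number: enough for 16 MB
	uint16_t sizeInSectors; // length of this extent
};                              // total: 16 bytes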

You can read more about the data structures in the filesystem readme, and also check the source code. They are both available at CPCSDK. The C++ code makes use of some C++ features such as vector, but this could easily be converted into plain C. You will also find a WIP ROM code for the z80 version, mostly done by SyX. I'll start filling it with actual disk access code someday, for now it's just managing the patching of AMSDOS code and forwarding the access to floppy drives.

GuideML AmigaGuide converter for Haiku

Posted by pulkomandy on Tue May 28 18:22:36 2013  •  Comments (2)  • 

I'm currently porting some Amiga software (namely, the ACE CPC Emulator). As usual with Amiga software, the user documentation is written in AmigaGuide. I wanted to convert it to a more usual format for Haiku users. I found some tools, but they all run only on Amiga systems. Fortunately, one of them is written in C and open source.

You can get GuideML for Haiku sourcecode at my GitHub account.

The interesting part (development-wise) of this is that I used a set of wrapper headers that convert the Amiga API calls into Haiku ones. The Amiga API is influenced by BCPL, which predates C, and has differences such as using FPutC instead of fputc; it also has extra stuff such as support for lists, and a different way to allocate and free memory (where you have to tell FreeMem() the size of the block you're freeing).

This set of headers allowed me to make very few changes to the core code of GuideML. I'm also using these headers for the port of ACE, where I could get the emulator core running quite easily in a short time.

I'm still refining these headers to remove warnings, make them C++ safe (as my user interface for ACE is written in C++) and make them behave as close as possible to the original system. I even reused parts of AROS, an open source rewrite of the Amiga OS, which aims for source-level compatibility. Their ReadArgs implementation was not too hard to port to Haiku, and now my software can use just the same smart argument parsing as on Amiga. I still have to get something similar to icon tooltypes working, but that shouldn't be too hard, as ReadArgs can parse strings from a file instead of the command line.

IUP portable user interface

Posted by pulkomandy on Fri Mar 30 22:48:11 2012  •  Comments (5)  • 

Just a quick note to say I started a project with IUP as the framework for the user interface. After spending some time with Qt and wxWidgets, I've finally found a UI toolkit that just does what's expected. No need for a precompiler, no replacement of my main function by some wrapper, no rewrite of the C++ STL.

IUP is written in C, but has a nice attribute-based interface that makes it very easy and pleasant to use. I've made good progress on building my windows, and the layout system is nice to work with (I'm still fighting with Qt's...). IUP is cross-platform as it uses either comctl32, GTK or Motif. I think I'll write a Haiku/BeAPI backend for it, as it's going to be rather useful.
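
As a taste of the attribute-based style, here is a minimal IUP window (a sketch against the standard IUP C API):

#include <iup.h>

// Returning IUP_CLOSE from a callback ends the main loop.
static int quit_cb(Ihandle* /*self*/)
{
    return IUP_CLOSE;
}

int main(int argc, char** argv)
{
    IupOpen(&argc, &argv);

    Ihandle* label = IupLabel("Hello from IUP");
    Ihandle* button = IupButton("Quit", NULL);
    IupSetCallback(button, "ACTION", (Icallback)quit_cb);

    // Widgets are configured through string attributes rather than
    // dozens of typed setter methods.
    Ihandle* dialog = IupDialog(IupVbox(label, button, NULL));
    IupSetAttribute(dialog, "TITLE", "Demo");
    IupSetAttribute(dialog, "SIZE", "200x80");

    IupShow(dialog);
    IupMainLoop();
    IupClose();
    return 0;
}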

It's quite easy to install in MinGW: just get the prebuilt binaries (either dll or statically linked) and includes, and drop them in the right place. No need to recompile for hours like wxWidgets.

I still have to see how integration of custom widgets is possible. This becomes useful sooner than one might think, as soon as you need a hex editor, a music-tracker-like interface, or something similar. But these seem to be handled by IUP with a generic "grid" control that looks quite flexible. In wxWidgets both of these were a mess, with no easy way to make a custom control like a hex spinbox, and a button grid giving very bad performance. Having to manually call freeze() and thaw() on widgets got boring really quickly. Not to mention the complete lack of threading support...

Let's see how it goes in a few weeks!

IUP with C++

While the documentation suggests a way of using IUP in C++ and encourages it, I was not so happy with using friend functions or static methods. So I came up with my own solution involving a bit of C++11 magic (variadic templates). The result is the IUP++ Callback class, which registers itself as an IUP callback (with any arguments) and forwards the call to a C++ object and method. It's used this way:

Callback<Gui>::create(menu_open, "ACTION", this, &Gui::doStuff);
Callback<Gui, int, int>::create(toggle, "ACTION", this, &Gui::doMoreStuff);

Where Gui is the class you want to answer the event, menu_open and toggle are IUP handles to UI objects, "ACTION" is the callback name, this is the object to forward the event to (an instance of Gui), and doStuff and doMoreStuff are the methods called. Notice the Callback also needs the parameter types of these methods - that's the second "int" in the second example (the first one being the return type, which defaults to int if missing, but is needed when you add parameters). I'm looking for suggestions on how to make this simpler, as there is still some repetition in it...
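
For the curious, here is a sketch of one way such a Callback helper can be written (my reconstruction from the description above, not the actual IUP++ source; in particular, the attribute name used to stash the binding is made up):

#include <iup.h>

template<class T, typename Ret = int, typename... Args>
class Callback {
    typedef Ret (T::*Method)(Args...);

public:
    static void create(Ihandle* handle, const char* name,
                       T* object, Method method)
    {
        // Stash the object/method pair on the handle itself: IUP keeps
        // application-defined attributes as raw pointers.
        Callback* binding = new Callback(object, method);
        IupSetAttribute(handle, "_IUPPP_BINDING", (char*)binding);
        IupSetCallback(handle, name, (Icallback)&Callback::dispatch);
    }

private:
    Callback(T* object, Method method) : fObject(object), fMethod(method) {}

    // This is what IUP actually calls; it recovers the binding and
    // forwards to the C++ method. (The binding is leaked when the
    // widget dies - good enough for a sketch.)
    static Ret dispatch(Ihandle* handle, Args... args)
    {
        Callback* binding =
            (Callback*)IupGetAttribute(handle, "_IUPPP_BINDING");
        return (binding->fObject->*binding->fMethod)(args...);
    }

    T* fObject;
    Method fMethod;
};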

Smarter vim filetype detection

Posted by pulkomandy on Tue Jan 3 20:46:05 2012  •  Comments (0)  • 

Vim is, as you may know, my favorite editor for all development purposes. The syntax highlighting is powerful and easily extensible. Most of the time, the file type detection for this is based on file extensions. That works well, unless you have files named .src or .asm containing assembly language for different CPUs...

The Vim documentation only shows how to guess the filetype from the file extension. Here's an example of doing something a bit smarter.

The idea is to put the CPU name on the first line of the file (in a comment), then use vim's powerful regexp matching features to detect it:

" vimfiles/ftdetect/z80.vim
au BufRead,BufNewFile *.z80	set filetype=z80
	" The usual way to do it for clear file extensions

func! s:detect()
	if getline(1) =~ 'z80'
		set filetype=z80
	endif
endfunc

au BufRead *.src	call s:detect()
au BufRead *.asm	call s:detect()
	" And the smart one. Note it is useless on BufNewFile,
	" as the file will not have the header yet.

Do a similar file for each of your CPUs.

Note: it should be possible to scan for the use of particular mnemonics to go without the header, but that requires a bit more work to identify many CPUs. Any volunteers?

The software archive

Posted by pulkomandy on Sat Jul 16 20:50:40 2011  •  Comments (0)  • 

This is the script that runs my BeOS software archive.

It is a website presenting software, similar to Aminet for the Amiga or other repositories. It runs without a database and is meant to be easily open to external contributions through FTP uploading.

The full script is less than 200 lines of Perl and features a category hierarchy, screenshots, and some other useful information about the software.

As usual, it is distributed under the MIT Licence.
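
The script reads its metadata from ".desc" files sitting next to each archived file. Here is a hypothetical example matching what the parsing code below expects; any line that is not a lowercase "field: value" pair becomes part of the free-form description:

name: CoolApp
shortdesc: A small demo application
author: Jane Doe
url: http://example.com/coolapp
platform: BeOS R5
version: 1.0
licence: MIT
year: 1999
screenshot: archive/source/coolapp.png

A longer free-form description follows the header fields and is
shown under the details table.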

#!/usr/bin/perl -w
use strict;
use CGI::Carp qw(fatalsToBrowser);
use URI::Escape;
use Time::HiRes qw(tv_interval gettimeofday);

my $t0 = [gettimeofday];

my %query;

# Parse an urlencoded query string into the %query hash.
sub parse_query {
    foreach my $i (split(/&/, $_[0])) {
        my ($varname, $mydata) = split(/=/, $i);
        next unless defined $varname;
        $mydata = '' unless defined $mydata;
        # Forms encode spaces as '+'; decode them and the %xx sequences.
        $mydata =~ tr/+/ /;
        $query{$varname} = uri_unescape($mydata);
    }
}

# GET parameters...
parse_query($ENV{'QUERY_STRING'}) if $ENV{'QUERY_STRING'};

# ...and POST parameters, read from standard input.
if ($ENV{'CONTENT_LENGTH'}) {
    my $line;
    read(STDIN, $line, $ENV{'CONTENT_LENGTH'});
    parse_query($line);
}

print "Content-Type: text/html\n\n";
print <<ENDHTML;
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta name="keywords" content="Haiku Software Archive BeOS" />
<meta name="description" content="BeOS software archive" />
<meta http-equiv="content-type" content="text/html; charset=iso-8859-15" />
<title>PulkoMandy's BeOS software archive</title>
<link href="style.css" rel="stylesheet" type="text/css" media="screen" />
</head>
<body>
<h1>PulkoMandy's BeOS software archive</h1>
<div>
<p>This is an archive of BeOS software. Unlike BeBits, the files are archived locally, so if their origin gets lost, they'll still be here safely.</p>

<p>Most of the files in this archive are from the emupt website, gathered by Xeon3D. Thanks to him for the great work.</p>
<p>If you don't like this web interface, you can access the archive directly <a href="archive/">here</a>.</p>
<p>Most of the stuff from emupt is still unsorted; have a look <a href="http://www.emupt.com">here</a>.</p>

<p>The main goal of this website is to try to get some of this software open sourced so it can be improved for future use.
Tracking down the original authors is sometimes difficult, but usually gives good results. They may also be happy to know the
BeOS world is still alive. This goal is the reason for the separation into three folders.</p>
<ul>
<li>Nosource: unfortunately, these apps are closed source. We need to get in touch with the authors and ask them to release the code.</li>
<li>Source: these applications are distributed with their source code, but they are not updated anymore. Take over the development of some of them!</li>
<li>Adopted: these apps are living their own life somewhere else.</li>
</ul>
</div>
ENDHTML

if ($query{file}) { # User wants to show details about a file
    my $file = $query{file};
    open(F, '<', "$file.desc") or die "Cannot open $file.desc";
    my %data;
    my $description = "";
    while (<F>) {
        my ($field, $value) = split(/:/, $_, 2);
        if (defined $value && length($value) > 0 && $field =~ /^[a-z]+$/) {
            $data{$field} = $value;
        } else {
            $description = $description . $_;
        }
    }
    print "
    <h2><a href=\"$file\">Download $data{name}</a></h2>
    <table>
    <tr><th>Short description</th><td>$data{shortdesc}</td></tr>
    <tr><th>Author</th><td><a href=\"$data{url}\">$data{author}</a></td></tr>
    <tr><th>Platform</th><td>$data{platform}</td></tr>
    <tr><th>Version</th><td>$data{version}</td></tr>
    <tr><th>Licence</th><td>$data{licence}</td></tr>
    <tr><th>Date</th><td>$data{year}</td></tr>
    </table>
    <img src=\"$data{screenshot}\" alt=\"screenshot\"/>";
    print "<p>$description</p>";
    close(F);

    print "<a href=\"/~beosarchive/\">Go up</a>";
} else { #User wants the full software list
    print "<h2>Full software list</h2>";

    sub loopDir {
        my $f;
        my($dir) = @_;
        local(*DIR);
        opendir(DIR, "$dir");
        while ($f=readdir(DIR)) {
            next if ($f eq "." || $f eq "..");

            my $path = "$dir/$f";

            if (-d $path) {
                # We found a directory: recurse into it
                print "<li class=\"dir\">$f</li>\n";
                print("<ul class=\"dir\">");
                &loopDir($path);
            } elsif($path =~ /\.desc$/) {
                # we found a .desc file
                $f = substr($f, 0, -5);
                $path = "$dir/$f";
                print "<li class=\"file\"><a href=\"?file=$path\">$f</a></li>\n";
            }
        }
        closedir(DIR);
        print "</ul>";
    }

    print "<div style=\"float:left\"><h3>Files without source</h3><ul>";
    loopDir("archive/nosource");
    print "</ul></div><div style=\"float:left\"><h3>Files with source looking \
        for a maintainer</h3>";
    loopDir("archive/source");
    print "</ul></div><div style=\"float:left\"><h3>Adopted projects</h3>";
    loopDir("archive/adopted");
    print "</ul></div>";
}

my $elapsed = tv_interval ( $t0 );
print "<p style=\"clear:both; width:100%; border-top: 1px solid #ECC;\">\
    Page generated in $elapsed seconds.</p></body>";

SVN to IRC commit bot

Posted by pulkomandy on Thu Jul 14 22:26:08 2011  •  Comments (0)  • 

CommitOMatic is an SVN post-commit hook that connects to IRC and announces the log message.

Put this file as "post-commit" in the hooks directory of a subversion repository, and set the settings as you need.
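
Subversion invokes the hook with the repository path and the revision number as its two arguments, so a commit creating revision 42 results in a call like this (the paths are illustrative):

/home/subversion/myrepo/hooks/post-commit /home/subversion/myrepo 42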

Download
#!/usr/bin/perl -w
# see http://www.javalinux.it/wordpress/2009/10/15/writing-an-irc-bot-for-svn-commit-notification/
# see http://oreilly.com/pub/h/1964
use strict;
# We will use a raw socket to connect to the IRC server.
use IO::Socket;

# The server to connect to and our details.
my $server = "irc.freenode.net";
my $nick = "Commit-O-Matic";
my $login = "pulkobot";
my $channel = "#commits";
# END CONFIGURATION - NO NEED TO CHANGE ANYTHING BELOW

my $repos = $ARGV[0];
my $rev = $ARGV[1];
my $commit = `/usr/bin/svnlook log $repos -r$rev`;
my $user = `/usr/bin/svnlook author $repos -r$rev`;
chomp $user;

# Connect to the IRC server.
my $sock = new IO::Socket::INET(PeerAddr => $server,
                                PeerPort => 6667,
                                Proto => 'tcp') or
                                die "Can't connect\n";
# Log on to the server.
print $sock "NICK $nick\r\n";
print $sock "USER $login 8 * :Commit-O-Matic Robot\r\n";

# Read lines from the server until it tells us we have connected.
while (my $input = <$sock>) {
    # Check the numerical responses from the server.
    if ($input =~ /004/) {
        last; # exit the loop
    }
    elsif ($input =~ /433/) {
        die "Nickname is already in use.";
    }
}

# We are now logged in: join a channel
print $sock "JOIN $channel\r\n";

# Now wait for the "end of name list" message
while (my $input = <$sock>) {
    # Check the numerical responses from the server.
    if ($input =~ /366/) {
        last; # exit the loop
    }
    elsif ($input =~ /433/) {
        die "Nickname is already in use.";
    }
}

# We are now in the channel: announce the commit.
my $cmd = "PRIVMSG $channel :";
$repos =~ s#/home/subversion/##;
print $sock "$cmd$repos: $user * r$rev\r\n";
chomp $commit; # svnlook adds an extra newline...
chomp $commit; # ...so chomp twice, for it and the real one.
my $com = $commit;
# Prefix each line of the commit message with the PRIVMSG command,
# using IRC's \r\n line endings between messages.
$com =~ s/\n/\r\n$cmd/g;
$com =~ s/^/$cmd/;
print $sock "$com\r\n";

# Get out of it
sleep(1);
print $sock "QUIT bye... \r\n";
sleep(1);
close($sock);

Opensource your abandonware

Posted by pulkomandy on Tue Nov 30 22:27:26 2010  •  Comments (3)  • 

As you may know, back in 2007 I resurrected the development of GrafX2. This old pixel-art program, made only for DOS, had been abandoned 6 years earlier by its authors, who had moved on to more modern computers. Today, GrafX2 is amongst the best tools for pixelling, particularly on Linux and other alternative operating systems. Many people are using it daily to draw really nice pictures. The newer versions added a lot of features, such as layers, and there's more to come.

This was possible only because the authors decided to release the source when the project stopped. The code wasn't perfectly clean; it was tied to MS-DOS, with some optimized parts written directly in assembly language and accessing the video card hardware directly. Of course, getting an SDL-based version out of it was not easy. But still, it took considerably less time than rewriting everything from scratch. Also, part of the userbase of the old GrafX2 upgraded to the new one. For some of them it felt like getting back home after years of using suboptimal tools.

During the revival of GrafX2, I had to develop my web searching skills a lot. First, the original GrafX2 website was offline, and the source code was gone with it. Thanks to filewatcher, an FTP search engine, and the web archive, I was able to locate a copy on some Russian FTP server. Then, I wanted to get in touch with the authors to let them know their software had finally found some use.

But GrafX2 isn't the main purpose of this article. Last month, I downloaded APlayer, a music player for BeOS. After some hacking to get it working on Haiku (which eventually led to uncovering and fixing a compatibility bug), I noticed that most of the music from Burned Sounds, my preferred chiptune collection, didn't load. The strange thing is that most of the files were in formats supposed to be recognized by APlayer. But looking closer, it turned out they are packed using the Shrink algorithm. This is a packing system from the Amiga days, which could be unpacked only on Amiga, for lack of any source code or format information. Well, that was until yesterday. Using my high web searching skills, I found the author of Shrink and kindly asked him by mail if he was willing to release the source code for his software, the last version being from 1996.

He was a bit surprised to see there were still files using Shrink around, but he had a Linux version of the archiver. This version is now released as GPL source code on SourceForge. This is an important step for me in getting more open source software available, but also in preserving old files packed in this format. I hope some other people will find it useful too.

I forgot to mention I also made possible the release of a whole lot of other BeOS software, made by Arvid and Jonas Norberg. This includes the Sawteeth sound synthesizer, as well as the "backslash n" demo, and also some other unfinished code.

The overall message for developers is: think about open sourcing your old projects. Even if the source is not as clean as it could be; even if it is of no use to you; even if it only works on an operating system that has been dead for 10 years: someone, somewhere, may find it useful. You can visit the Unmaintained Free Software page to get some examples of how passing on a project to someone else may work. But not everything goes through this website.

For non-developers: don't hesitate to get in touch with the devs, even for unmaintained apps, and ask for an open source release. If the software is dead, the author isn't going to get any money for it, so why not release it so other people can improve it? This is the fastest way to get more open source software. And don't be shy; developers are, above all, normal people, and they do like hearing from users.

Static linking nightmares

Posted by pulkomandy on Mon May 31 11:25:58 2010  •  Comments (0)  • 

I recently ported Reloaded, an Amstrad CPC emulator, to Windows. This turned out to be more complicated than expected, and I encountered problems for which I couldn't find any proper solution on the internet, so I decided to tell you how I solved them.

Step one: SDL and wxWidgets

Reloaded is a very special project. It started its life as a fork of the Caprice32 emulator. Caprice was an emulator designed for Windows, but later ported to Linux using SDL to render the screen. SDL didn't provide support for anything other than pixels, and we wanted complex windows, so we decided to use wxWidgets. After some hacks, we got that running: SDL was used for sound and timers, and wxWidgets for display rendering.

It worked quite well on Linux, but when trying to run it on Windows, I encountered a problem: both SDL and wxWidgets try to define the WinMain function to do some special inits and call the application's main. Of course, having two WinMains didn't please my compiler.

wxWidgets offers a nice way to solve that: their WinMain is defined in the macro IMPLEMENT_APP, which is easy to replace with something else if you want (you have to do some inits by yourself, but that's OK). SDL, however, doesn't allow you to do so: as soon as you #include SDL.h, it is not possible anymore to define WinMain yourself!

Reloaded now uses Portaudio for sound and doesn't rely on SDL anymore. I also had to disable OpenGL, as our code used SDL_Surfaces, and rewrite some timer handling to use native Windows functions.

Step 2: getting it to link

Another particularity of Reloaded is the way it's built: we wanted to create a platform-independent core, and a GUI part that wraps around it and provides the windows and graphical stuff. This allows for really easy porting: you have very little to alter in the core and only mess with rewriting the GUI part.

That was without counting on autotools limitations: to build these files properly, we had to use different defines on each side. The core has to look for Portaudio includes while the GUI wants wxWidgets. We solved that by building them into two separate .a static libraries, then linking these libs into a single executable.

Again, this was working well on Linux, but Windows strangely failed to link the thing, giving undefined references to Portaudio and some other DLLs we are using. Also, I was getting an "undefined reference to main" for no apparent reason (as there was a WinMain function in my program). After some searching, I found I was supposed to add -lmingw32 _before_ my .a files in the g++ command line, or else the runtime loader wouldn't find WinMain. But doing so would result in undefined references to every possible function in wxWidgets.

After a full week of trial and error, I finally managed to get it working: you have to link -lmingw32 first, then your static libs (in the right order so they can link to each other), and finally link wxWidgets and the other stuff you need.
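
In concrete terms, the final link line looks something like this (an illustrative sketch; the object and library names are placeholders, not Reloaded's actual ones):

# Order matters: -lmingw32 comes first (so WinMain is found), then our
# own static libraries (GUI before core, since the GUI calls into the
# core), then wxWidgets, Portaudio and the system libraries.
g++ -o reloaded.exe main.o \
    -lmingw32 \
    libreloaded_gui.a libreloaded_core.a \
    `wx-config --libs` -lportaudio -lwinmm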

The reason the gcc tools want static libs in the right order, while dynamic ones don't care, is that the linker scans each static archive only once, in command-line order, extracting just the objects that resolve symbols still undefined at that point. I still find that impractical and even annoying. But I hope this article will help you solve the same kind of problem, should you ever encounter it.

Atari Lynx development under Linux

Posted by pulkomandy on Sun Jul 5 01:33:33 2009  •  Comments (0)  • 

So: I have a Lynx 2 lying around in my pile of stuff, and I decided to try programming it a bit. The hardware looks rather nice, and powerful enough to develop things quickly. Sprites that can be zoomed and repositioned on every scanline, allowing textured polygons; 4-channel sound based on polynomial generators; and a few other nice gadgets. So here is a category of my website dedicated to my adventures with this console.

CC65

cc65 is a compiler for the 6502, which is the CPU of the Lynx. It comes with basic libraries allowing you to do a few things. These libraries are portable across several platforms (C64, NES, ...) but with quite a few limitations. They are far from exploiting the console's capabilities fully, so I will probably write my own toolkit to handle all that. Also, cc65 seems rather limited at optimizing C; you have to do everything by hand to get efficient code, so I think I will quickly have to learn 6502 assembly to get to the serious stuff. That said, I already speak ARM and z80, so the transition should happen without too much pain.

CC65 is not available in the Debian packages, because of licensing problems. The code is too old to have been put under the GPL... So the installation has to be done by hand. Fortunately, it goes rather smoothly.

wget ftp://ftp.musoftware.de/pub/uz/cc65/cc65-sources-2.12.0.tar.bz2 # download the latest version of cc65
tar xvf cc65-sources-2.12.0.tar.bz2 # unpack it...
cd cc65-2.12.0                      # let's see what's inside the folder
make -f make/gcc.mak                # start the build
sudo make -f make/gcc.mak install   # install!

Just one small subtlety: two environment variables must be set to tell cc65 where it is installed. I put this in my .bashrc file to solve the problem once and for all.

# Set up the important cc65 paths
export CC65_INC=/usr/local/lib/cc65/include
export CC65_LIB=/usr/local/lib/cc65/lib

Project skeleton

Right, so I now have a working compiler. To test it (and to use it later), we need Karri's example package. This package contains a complete project with a makefile and everything needed to generate a .lnx file. It is a simple program implementing a small drawing application. It will serve as the base for my next projects, but I will surely make a few changes to the makefile. Anyway, for now, I run make and it compiles a .lnx file without the slightest problem. Cool.

Emulator: Handy SDL

For now I have neither a cartridge nor a BLL cable for my Lynx, so I need an emulator to test my stuff. I tried mednafen, which is in the Debian packages, but it crashed miserably with a segfault on the first launch. To the trash, then. I downloaded the source code of Handy SDL. One small problem with this archive: the directories don't have the execute permission, which prevents the build from working. A chmod +x -R on the directory extracted from the archive fixes the problem. After that, it compiles by itself. To use the emulator you also need a dump of the Lynx's internal ROM. The one found at planetemu looks very good. It would seem you are not allowed to have this file if you don't own a real Lynx. I'm no law specialist; do your own research.

For Handy to work well on my machine, I have to set the bpp to 16, otherwise the picture doesn't display correctly. Apart from that everything goes fine. I also have the impression that the music of my test cartridge (made with ABC) doesn't work. But anyway, ABC doesn't look very simple to use, so my first goal for the Lynx will be to code a real tracker able to fully exploit the machine's capabilities, which look rather nice.

What's next...

Next steps, for when I have the time:

  • Build a cartridge and a BLL cable to test things for real on the Lynx,
  • Code a tracker able to fully exploit the console's capabilities,
  • Code a game or a demo for the console. Start simple, then do more complicated things later,
  • Sell the games (OK, now I'm dreaming; not before 2020 at least...),
  • Try to conquer the world!