As you have probably seen, the website has changed quite dramatically. The reason is that we moved mupuf.org to a new server with more storage space.
While I’m a bit (actually very) late on my review of damn-what’s-its-name-again (yes, that late) Tomoyo Linux, I wanted to share a couple of things with you. As you may know, I’m currently studying for a research MSc in Computer Science (woohoo), which means I read papers on a regular basis and am asked, as part of my studies, to summarise and present them.
In one of my lectures (on the management of large collections of described data), I was asked to review a survey on the performance of Meta-Search Engines. It sounded interesting… and turned out to be much more than I expected!
To many people out there, parallel programming may not sound very useful, and actually sounds pretty complicated. However, everybody now expects processors to have several cores and software to make use of them. The whole purpose of parallel programming is to leverage the capabilities of multi-core processors, but this comes at a cost: we need to rethink the way we program, moving from the old single-threaded or “a thread per task” designs to applications that describe their work-flow as tasks that can be processed concurrently: what is called task-based programming.
This may sound difficult, but it is necessary to produce code that can actually use several cores and also scale properly (i.e. keep a decent performance level) from single-core processors to computer grids with hundreds of cores. The problem with hand-made multi-threaded apps is that threads are often allocated per task: the number of logical threads in the program is not matched to the number of hardware threads on the CPU, but to the number of tasks, which is fixed. This means the program does not scale at all, unless thread pools are created and managed by hand, which implies a huge code overhead for the thread-pool management system. Besides, you still have to do the load balancing between threads on your own, which is again a difficult task.
Unlike multi-threaded apps with hand-written synchronisation code, the goal here is to leave all thread management and task assignment to a purpose-built library. In this article, I am going to explain the very basics of Intel Threading Building Blocks, most likely one of the most efficient and simple multi-core programming libraries in the wild.
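To make the idea concrete, here is a minimal, library-free sketch of task-based decomposition using only standard C++ (std::async); TBB expresses the same pattern with tbb::parallel_reduce and a work-stealing scheduler that balances the tasks across the available cores. The function name and grain size below are my own choices for illustration:

```cpp
#include <algorithm>
#include <cstddef>
#include <future>
#include <numeric>
#include <vector>

// Task-based sum: the work is split into fixed-size chunks, and each chunk
// becomes a *task*, not a dedicated thread. With the default launch policy,
// std::async may run a task on another thread or defer it, so the number of
// tasks can grow without pinning one OS thread per task.
long parallel_sum(const std::vector<long>& data, std::size_t grain = 1024) {
    std::vector<std::future<long>> tasks;
    for (std::size_t begin = 0; begin < data.size(); begin += grain) {
        const std::size_t end = std::min(begin + grain, data.size());
        tasks.push_back(std::async([&data, begin, end] {
            return std::accumulate(data.begin() + begin,
                                   data.begin() + end, 0L);
        }));
    }
    long total = 0;
    for (auto& t : tasks)  // joining the tasks, not the threads
        total += t.get();
    return total;
}
```

Note how the decomposition (grain size) is decoupled from the hardware: the same code runs on one core or sixty-four, and the runtime decides how much actually happens in parallel.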
A month ago, I was in Chicago, attending the X.org Developer Conference (XDC2011) as a Nouveau developer.
CMus is so far my favourite audio player. It is gapless, powerful, scriptable and console-based.
The latter is both an advantage and an inconvenience. Indeed, when procrastinating by browsing the web, I often find myself wanting to watch Flash-based videos. So, I need to find which console runs CMus to stop the music. I usually launch it in the first console of Yakuake, a Quake-like terminal for KDE, but stopping the music requires multiple actions.
I could have used KDE’s global keyboard shortcuts to send a pause/unpause request to CMus, but I’m far geekier than that. Instead, I decided to build a remote control featuring a giant physical button.
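For reference, the purely software route does exist: cmus-remote is the control client shipped with CMus, and its -u switch toggles pause/unpause. A tiny wrapper like the following (the function name and fallback messages are my own) could be bound to a KDE global shortcut:

```shell
#!/bin/sh
# Toggle CMus playback from anywhere. cmus-remote talks to the running
# CMus instance; -u means pause/unpause. Fail gracefully when CMus is
# not installed or not running.
toggle_cmus() {
    if command -v cmus-remote >/dev/null 2>&1; then
        cmus-remote -u 2>/dev/null || echo "CMus is not running"
    else
        echo "cmus-remote not found; is CMus installed?"
    fi
}
toggle_cmus
```

But where is the fun in that?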
While this idea was appealing for its own sake, I wasn’t fully satisfied. Why stop at sending commands? What about also receiving data from the computer and displaying visual information on a screen?
As I know you like videos, here is the video of the current state of the project:
I’m currently an intern at Bordeaux I, working on the security of sensor networks.
Today, I wondered how many articles I read in the last 3 months.
$ find papers/ | wc -l
42
The result was a bit puzzling, and I found that the following could be a great candidate for the ultimate question:
How many articles should one read before writing a research article?
If you don’t understand what I’m talking about, Google is your friend.
I’ve had a very annoying problem for the last couple of months that I could never find the time to diagnose, until two days ago, when it finally got the better of me. My computer would sometimes, for no apparent reason, refuse to suspend (or rather, it would begin and then, after twenty seconds, abort the suspend procedure, breaking all my Internet connections and making the CPU and fans overwork).
This had been going on for a while, and even though I suspected it was linked to a VM I was working on in VirtualBox, I had no clue how to diagnose it. As it turned out, the problem came from defunct processes trying to read an SSHFS share that pointed to a directory inside the VM. Whenever the VM was rebooted or shut down, the SSHFS share became invalid. Having a cp process (and probably others, like Thunar or ls) trying to access it was enough to trigger the bug: the process would hang, and killing it would often leave it stuck as defunct (don’t ask me why, I don’t have the faintest idea).
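Such stuck processes are easy to spot once you know what to look for: in the STAT column of ps, a defunct process shows up with state Z (zombie), and one blocked on dead I/O with state D (uninterruptible sleep). A quick filter for both (the awk pattern is my own):

```shell
# List tasks in uninterruptible sleep (D) or defunct/zombie (Z): the usual
# suspects when the kernel refuses to suspend. NR == 1 keeps the header.
ps -eo pid,stat,comm | awk 'NR == 1 || $2 ~ /^[DZ]/'
```

On a healthy system this prints only the header (plus the occasional kernel worker briefly in D state); anything that stays listed across several runs is a good candidate for the culprit.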
So, how did this prevent Suspend2RAM from suspending my computer? Well, Suspend2RAM asks tasks doing I/O to freeze, and it will not suspend if one of the tasks does not answer within 20 seconds, writing this message to dmesg instead:
Freezing of tasks failed after 20.01 seconds (1 tasks refusing to freeze, wq_busy=0):
This is because the defunct process, which is still considered to be doing I/O, is of course unable to respond to the signal. Now, this bug is annoying from a layman’s point of view, and it’s pretty hard to figure out why such behaviour cannot be avoided. As far as I’m concerned, two design mistakes made this possible:
It took us a long time to get our hands on the new Arduino, but support has finally landed in the Arduide.
As a bonus, the required Arduino SDK is now arduino-0022 instead of 0018. I hope you don’t mind the update ;)
Please report bugs and feature requests using the “new issue” link.
As of today, mupuf.org should be accessible over IPv6.
Linux 2.6.37 has just been released and I’ve been surprised to be quoted on the LinuxFR release summary.
Indeed, some of my work on Nouveau has been merged mainline. I won’t say it was easy, as it was a lot of work, but it was worth it!
I am now listed among the new contributors of Linux 2.6.37 :)
I’ll keep you updated on my work soon!