This post is a space to discuss some of the assumptions I make when thinking about the security of Linux. It’s aimed at being a quick reference, so it does not include extensive references to refereed papers or actual statistics, though I will write a longer version at some point. I’ll update the post over time to clarify it.
Once Upon a Time
Trying my best to make the title sound like one of those tales you’d tell your kids when putting them to bed. Those who know me well know that I’m doing a PhD, allegedly on activity confinement, and those who know me even better have witnessed me rant every day for three months about how it’s impossible (because ethnomethodology, phenomenology, embodied interaction, situated action, etc.). So I decided to convert to another religion: I’m now a guru of the church of sandboxing. Hopefully neither cognitive dissonance nor my PhD advisor will catch up with me before my defence (ha ha).
There’s a plethora of tools for app sandboxing out there, on every major OS, and even more people arguing over which is the most secure – nothing I can convince myself to care about, because all these sandboxing tools assume, in one way or another, that the thing they’re trying to contain is designed to be put in their box. This worldview fits server apps incredibly well: they’re designed to process one type of data, continuously, and to produce a specific output at a specific place for a specific input. Security researchers also got very wealthy exploiting the silicon nuggets in our mobile phones: phone apps have so little utility, and phones such restricted interaction techniques, that you never do any substantial multitasking or process any complex kind of data; you have fewer options for app customisation than on the desktop, and as a result most mobile apps process their own data rather than your documents.
All of that is wonderful, but when you’re interested in general-purpose, multitasking-capable, complex operating systems, it doesn’t work. Users tend to keep a lot of data around on their desktop OS; they have apps that process multiple formats, and they reuse a file across multiple apps. They constantly multitask with apps that don’t care in the least about proper password storage, etc. You’re even asked to process data from multiple untrusted sources on a routine basis to earn your salary! And yet apps easily get compromised (especially Linux apps), and stay compromised afterwards. They can destroy all of your data, abuse your resources and steal your root password with surprisingly little effort!
It should be obvious to all that access control policies and “fine-grained” sandboxing are no cure for the disease of the desktop. If not, read field studies on information workers’ daily lives, contemplate the sheer complexity of their work days, and then come back and ask them if they want to sit and write policies before they get any work done. Our challenge is to have the policy produced on-the-fly, and with no user cost (time, money or cognitive load), s’il-vous-plaît. Sandbox Utils is my collection of black magic tricks that do just that.
After Martin published his article on the security of Wayland, we received plenty of feedback, and among it emerged a discussion on the difficulty of preventing the spoofing of authentication and authorisation dialogs (the former often being used as a by-product of the latter). Such dialogs appear either when you require a privilege escalation (gksu-like) or access to a restricted/privileged interface controlled by the compositor/desktop environment. In the system we envision, applications have restricted privileges and some are awarded special ones (such as the ability to record the screen, receive special keyboard input, etc.). When an app needs a privilege it does not naturally have, it must ask for it through an authorisation protocol. We also need to provide a means of authentication that resists spoofing, for the few cases where authentication remains necessary. In this article, I explore the threat model, security requirements and design options for usable and secure authorisation and authentication on modern Linux.
Erratum: this article is not about when to use authorisation, but about how to design it. I fully concur with the view that the best permission request is the one that does not involve disturbing the user! The ideas discussed here apply to those few edge cases where we may not be able to design authorisation requests away (updated on 2014-03-28).
It’s been more than 3 years since my last security-related blog post. One might think I lost interest but the reality is that I just suck at blogging. This blog post is meant as a summary of a debate a few of us had privately and publicly on the Wayland ML.
Disclaimer: although I try to stay up to date with everything that surrounds the security of X11 and Wayland, what I write in this article may be outdated, incomplete or simply blatantly wrong. Since this article is the basis for a document I’m planning to write to help Wayland compositor developers implement secure compositors, I would love to hear your feedback!
Frama-C is a static analysis tool that does not just match “dangerous” function names or code patterns like RATS, and that does more than Splint’s memory management, control flow checks and reachability analysis. Frama-C uses abstract interpretation to analyse the potential values of variables and detect a whole range of bugs in programs. It also provides a specification language to write assertions or pre-conditions on functions and prove that these assumptions hold. Frama-C is designed for soundness: it may report false positives (for instance, fail to validate an assertion on the return value of a function) but never false negatives. It focuses on showing the absence of bugs, by proving that assertions and pre-conditions hold. This has applications in evaluating the safety of critical systems.
What interests us here is the combination of value analysis and slicing, as the slicing lab with my language-based security students this year was a bit… light! In my defence, I didn’t expect them to actually do their homework! We’ll work through combining value analysis and slicing on code samples, starting with the more basic aspects of Frama-C. This post is largely inspired by the contents of the Frama-C documentation; in particular, many code samples are taken or derived from the Value analysis documentation.
Update: I’ve received interesting feedback on this article from Julien Signoles, one of the many talented people behind Frama-C. I’ve amended/clarified some of the things I discuss in the post, mostly changing ambiguous vocabulary to avoid confusion. Julien also explained in more detail some aspects of Frama-C which I had forgotten, so I’ll try to inject his wisdom into the original article. Thanks Julien!
This post is the first of a hopefully irregular series of articles on the consequences, in the information security industry, of decisions based not on the most recent research or even on basic threat modelling, but on common-sense faux amis. Password Expiration Policies (PEP from now on) are quite widespread these days, and are justified by various assumptions about attacker and defender behaviour. They consist of forcing you to change your password on a regular basis to access a service, often restricting syntactically similar passwords.
PEP may sound brilliant at first. However, a superficial economic overview and an attack/attacker breakdown of the problem help explain why PEP do more harm than good in practice. Demonstration.
A month ago, I was in Portland, attending the X.org Developer Conference (XDC2013) as a Nouveau developer and a board member of the X.org foundation.
After a very long period of stagnation and an accumulation of hacks to support the growth of our Web needs, our website has finally had a full revamp. We’ve hacked our own Octopress theme and updated virtually all the content, hosting more projects than ever before!
Hi there. This is a follow-up to the Programmable Paper Lantern project which I previously blogged about. Since last time, I’ve found a much better cage design, ordered a first set of LEDs and a transformer for testing (with a very DIY solution to power it, as you’ll see…), and, most importantly, I’ve designed the electronics that will be used in the final version. That last bit obviously happened with tremendous help from Martin, who went so far as to offer me my first Arduino, and who also re-taught me the basics of electronics. Shame on me, the engineering graduate, but I must say I started almost from scratch on that, and I still know little beyond elementary survival notions.
Over the past month or so, I have been working on creating the hardware and software needed to boot, reboot and hard-reboot my computers at home. The reason I need this is that I am going away for a few months, away from my computers, and I would like to keep on reverse engineering nvidia’s ptherm.
A possible software-only solution is to use Wake-On-Lan to boot up the computer remotely, ssh to connect to it, grub-reboot to select the kernel to boot at the next reboot and, finally, a watchdog to reboot the computer when it crashes. If that seems like a pain to use, I definitely agree!
So, a hardware-based solution seems more interesting! The standard solution to this problem is called IPMI. Its interesting features are:
- Being able to cut the power down and put it back up again;
- Read back some of the state (Power & disk LEDs for instance);
- Having a serial console.
However, I don’t have an IPMI-ready motherboard, as they are usually found on servers. I thus decided to build myself an equivalent. The only thing that worried me was that I had to be able to control it from the internet, so one machine had to be up and running at all times! I decided to buy a Raspberry Pi, as it was the cheapest and lowest-power-consumption computer I could get with an ethernet port and a few General Purpose Input/Output (GPIO) pins.
Using those GPIOs, I can control a custom-made circuit board to cut the power, press the power switch and read the power LED state, but the real question was which user interface to wrap around those GPIOs. I decided to make a web-based user interface because it was more demo-able and could also be updated in real time to display logs and the power LED state.