Recently I got interested in traffic shaping to simulate various bandwidth capacities. It was a headache to find working software in that field until I realized that 1) it is easy and 2) it has been straightforward on the Linux kernel since version (2.2.x?).
First of all, you need to modprobe a few key kernel modules: cls_u32, sch_cbq, ip_tables.
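Loading them is a one-liner per module (this needs root, and the commands obviously can't be verified outside a machine with these modules built):

```shell
# Load the modules mentioned above; names as given in the post,
# availability depends on how your kernel was built.
for mod in cls_u32 sch_cbq ip_tables; do
    modprobe "$mod" || echo "could not load $mod (are you root?)"
done
```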
Then all you have to do is use the “tc” utility, which is part of the iproute package.
For instance, let’s assume that you want to limit incoming and outgoing traffic to 256kbit/s on your local host, and that eth0 is a 100Mbps-capable network interface. Here is what you have to do:
# tc qdisc add dev eth0 root handle 1: cbq avpkt 1000 bandwidth 100mbit
# tc class add dev eth0 parent 1: classid 1:1 cbq rate 256kbit allot 1500 prio 5 bounded isolated
# tc filter add dev eth0 parent 1: protocol ip prio 16 u32 match ip dst 0/0 flowid 1:1
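To check that the qdisc, class and filter are actually in place (and to watch the per-class byte counters while transferring data), tc can dump its own state; this is standard tc usage, sketched here for the setup above:

```shell
# Show the qdisc/class tree with statistics (byte and packet counters)
tc -s qdisc show dev eth0
tc -s class show dev eth0

# Show the filter attached to the root qdisc
tc filter show dev eth0 parent 1:
```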
Then, if you want to change the limit, use “replace” instead of “add” in the second (class) command. For instance:
# tc class replace dev eth0 parent 1: classid 1:1 cbq rate 64kbit allot 1500 prio 5 bounded isolated
You will easily notice that it does the job very well.
Anyway, I ran into some trouble when I started to monitor the traffic: the bandwidth I set with tc doesn’t match the actual limit at all. For instance, when setting 50kbit in tc, I get a real limit of around 24 *kBytes* per second, which is about 200kbps. At first, I thought it was a problem with “knetdockapp”, which I use to monitor the traffic. So I used Bandwidthd, which showed similar results, and finally I transferred a big file for 60 seconds and calculated the real rate from the number of bytes received. The results were still the same.
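To rule out a simple bits-versus-bytes mix-up, here is the arithmetic on the numbers above (tc rates are in bits per second, while most monitors report bytes per second):

```shell
# tc "50kbit" means 50 kilobits/s; the monitors reported ~24 kBytes/s
configured_kbit=50
observed_kBps=24
observed_kbit=$((observed_kBps * 8))   # convert bytes/s to bits/s
echo "configured: ${configured_kbit} kbit/s, observed: ${observed_kbit} kbit/s"
```

So even after unit conversion the observed rate is roughly four times the configured one; a units mix-up alone does not explain the gap.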
So I’m still wondering why there is such a difference between the figure provided to tc and the real shaped bandwidth.
Following its commitment to desktop virtualization solutions, Ulteo, an Open Virtual Desktop Infrastructure company, announced today the first version of its Open Virtual Desktop solution for enterprises. Delivering faster deployment times and ease of management for the IT department, this first release integrates easily into an existing professional Linux or Windows IT environment. The solution can be up and running in a few minutes, delivering rich desktop applications to corporate users.
I was recently trying Nivio, the Online Desktop project that raised about €18M to fuel its development, and I noticed there was no word processor available. But I found a solution: open Internet Explorer and launch Ulteo Online Desktop. You’re done!
Google Chrome, a new web browser – based on Apple’s WebKit – has been released by Google today. Only available for Windows for now, it should come to other platforms such as Linux later.

What is funny, in my opinion, is that normally, releasing a new web browser would not even make a news wavelet in the IT world. But Google is releasing it, so it makes a lot of noise. It’s clear that even if they were to release a toilet bowl, it would generate a lot of press.

The good news is that they are releasing this product as open source, because they “owe a great debt to many open source projects, (…) We’ve used components from Apple’s WebKit and Mozilla’s Firefox” (which are open source projects). The question for Google now is: why not improve WebKit’s and Firefox’s features and performance instead of releasing your own web browser?

Another question: licensing. According to Wikipedia, Google Chrome is covered by a BSD license, but as far as I know, WebKit is covered by the LGPL. So, find the bug (if any). Regarding Mozilla, I don’t know enough about its licensing, so maybe it can be converted to BSD. I tried to find information about source code licensing on the Chrome website, but I couldn’t find any in 5 minutes.

And by the way, why no antialiasing for text rendering? What else? Hmm… OK, we have a new web browser around. Let’s dance.
Yes, this is apparently the first sound ever successfully recorded. The French song “Au clair de la lune” was recorded by Édouard-Léon Scott in… 1860, on a phonautogram. It could only be decoded recently, with modern techniques. Listen to it!
[this is a repost of my answer to someone who wondered about security of personal data on Ulteo forums]
Let me explain my way of thinking about this issue.
At first, let’s assume that data integrity and confidentiality are the two needed requirements:
- we want to be able to retrieve data, as long as we need it, whatever happens on the earth (bombs, earthquakes…)
- we want to ensure that no one but authorized people can read the data, use it or modify it
Now there are different cases. Let’s take these two cases to simplify:
- your personal data: most of the time, it is stored on your computer. As a result, it is totally unsafe, for several reasons: someone can break into your computer and steal your data, your hard drive (or laptop) can be stolen, your house can catch fire, etc. A slightly different case is your online data, for instance Gmail, Yahoo! Mail, etc. These services won’t guarantee anything but “doing their best” to secure the data. It means that they likely have advanced security systems (but who knows) and redundant servers around the planet, so in this case your data is probably safer stored online. Still, it is not really confidential: Gmail reads your emails to generate ads, for instance. Additionally, accounts do get closed, for any reason. I know people whose Yahoo! Mail account was closed because the Terms of Service weren’t respected (without any further detail). Later, they were unable to get in touch with anyone at Yahoo! to get it back, and lost all their emails. Maybe in some cases that’s a bug. Worse: laws permit government agencies to access your data at any time for any “good reason” (as far as I know that’s the case for Google in the USA and Blackberry in the UK). So there is still a risk of having your data vanish into thin air, even if it is stored online with a big service.
- data within a corporation (i.e. “sensitive data”). Here, everything depends on the corporation’s policy on data security. Most of the time, I think there is a good level of data integrity, assuming there are mechanisms to replicate the data to other geographical locations, for instance. Confidentiality is certainly worse, because security cannot be perfect, and also because many people within corporations use Gmail, Blackberry and other services intensively, apparently even for sensitive transactions/discussions. This is a real (known) issue for strategic corporations that need a high level of confidentiality.
Now, what I think is that the key answer to data integrity and confidentiality is:
- redundancy, to address the integrity problem
- strong encryption, to address confidentiality
For instance, with tools such as GPG and Thunderbird’s Enigmail (which are provided and installed by default on Ulteo), you can encrypt your sensitive emails very easily. The only constraint is that you first need to import your recipient’s public key, but that needs to be done only once. Then, all you have to do is select “encrypt message” when writing your email. With a 2048- or 4096-bit key, this even removes the need for any security or encryption “on the line” (TLS, SSL…).
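The same one-time import and per-message encryption that Enigmail does can be sketched with plain GPG on the command line; the key file, email address and document names below are only placeholders:

```shell
# One-time step: import your correspondent's public key (placeholder file)
gpg --import alice.pub

# Encrypt a file so that only the holder of the matching private key can read it
gpg --encrypt --recipient alice@example.com report.odt

# The recipient decrypts it locally with their private key
gpg --decrypt report.odt.gpg > report.odt
```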
In this case you can even add a personal Gmail account in CC: as a safe backup! You won’t be able to read the email content within Gmail, but if you ever need it, you can retrieve the email and decrypt it locally. And Gmail won’t be able to read the content of these archives in any way.
In the same spirit, Ulteo also integrates the Kopete “SILC” plugin, which provides fully encrypted IRC-style chat.
Now, there is the question of the data stored at Ulteo. Right now, I can’t tell you more than “we’re doing our best to secure your data”. This means security measures on the servers, and replication. But I agree that it’s not an ultimate solution.
We plan to provide an encryption feature that would permit us (and you) to store *only* encrypted data, which could be used/decrypted only by the owner of the data, using their credentials.
In this case, you would have a local secured directory where you could put all your sensitive data, and the same on Ulteo online services. So in the unfortunate case where your hard drive is stolen, or Ulteo’s servers are cracked, nobody but you could read your secured data.
Today, I spent some time meeting with Ladislav Bodnar. Ladislav is the (nice) guy behind distrowatch.com, a reference Linux website, one of the biggest and certainly one of the nicest. Ladislav came from Taipei, Taiwan, to spend two days in Paris. I haven’t dared taste his nice present yet, a kind of lychee candy under a plastic film, because everything on it is written in (traditional) Chinese and I’m completely blind when it comes to Asian languages. But tomorrow I will, for sure. Will keep you updated.
The Ulteo main web server is experiencing a big, big rush. It’s been under a heavy load for two days, and of course, page loading is sometimes slower than expected… We have performed emergency tasks, such as moving static content to other servers, but the rush is really too big. That’s the Slashdot effect, which is not turning into too bad a situation because our servers have good connectivity and still answer… Of course we’re planning to switch to bigger servers, with some redundancy, but this will take a few days. So, be patient, register and come back later if you can’t launch a session or don’t want to wait… Apologies for the situation, but we didn’t expect such a big and explosive rush…
P.S. We are also reading all your warm emails! It seems that you like it, and we are going to make it even better, with some stuff you don’t even expect…