I was at a talk by Genevieve Bell (an anthropologist), and one example I remember is how India brought connectivity to a few rural villages (under the E-Seva initiative, I believe).
The village had no connectivity, but it had a bus that travelled back and forth to the nearest town every day. What they did was install a Wi-Fi receiver in the bus. While at the village, the bus automatically reads messages from a Wi-Fi-enabled computer located at the bus stand; when it reaches the town, it sends all the messages out to the Internet and brings back whatever messages the world has sent in reply.
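The exchange described above is a classic store-and-forward "data mule" pattern. The following is a minimal sketch of that idea; the class names (`Node`, `Bus`) and the message format are hypothetical, purely for illustration, and the real system's components and protocols were of course different.

```python
# A minimal store-and-forward "data mule" sketch: a bus carries messages
# between endpoints that are never simultaneously connected.

class Node:
    """An endpoint with outgoing and incoming message queues."""
    def __init__(self):
        self.outbox = []   # messages waiting to be carried away
        self.inbox = []    # messages delivered by the mule

class Bus:
    """The mule: picks up and drops off messages at each stop."""
    def __init__(self):
        self.cargo = []

    def sync(self, node):
        # Pick up everything the node wants to send...
        self.cargo.extend(node.outbox)
        node.outbox.clear()
        # ...and drop off anything addressed to this node.
        for m in [m for m in self.cargo if m["to"] is node]:
            self.cargo.remove(m)
            node.inbox.append(m)

village = Node()
town = Node()   # stands in for the Internet-connected gateway

village.outbox.append({"to": town, "body": "search query"})
bus = Bus()
bus.sync(village)   # bus stops at the village, picks up the query
bus.sync(town)      # bus reaches town, delivers it
town.outbox.append({"to": village, "body": "search results"})
bus.sync(town)      # bus picks up the reply
bus.sync(village)   # the next trip delivers it back

print(village.inbox[0]["body"])   # -> search results
```

A full round trip takes two bus journeys, which is exactly why such systems suit email-style traffic far better than interactive use.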
India is going to have (or may already have) satellites covering its breadth, and then this solution will be obsolete. Nevertheless, it strikes me with its simplicity (or rather, its out-of-the-box thinking). Different versions of the same idea can be found in other places. For example, in Polar Grid the use case is as follows (from what I heard): many instruments are installed across a large region, continuously collecting data, and rather than setting up a communication network, a small plane flies over the area and collects the data from the instruments via Wi-Fi. Similarly, Professor Tanenbaum said, “Never underestimate the bandwidth of a truck full of tapes,” and in the paper “Above the Clouds: A Berkeley View of Cloud Computing” the authors note that it could be cheaper to FedEx disks to the cloud computing provider.
Also, another set of use cases is emerging. With systems like the Large Hadron Collider and large telescopes, the size of the data is going out of bounds. For example, the March issue of IEEE Spectrum reported an optical receiver running at 640 Gb/s. With these systems, petabytes (10^15 bytes) of data are common. The problem is that even with 10 Gb/s Ethernet (yes, TeraGrid is connected via 10 Gb/s Ethernet), sending a terabyte takes around 20 minutes, and transferring a full petabyte takes about 11 days. Therefore, it might become common in the future to receive, quite literally, a container full of data.
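The back-of-envelope arithmetic behind those figures is easy to check. The sketch below computes the idealized lower bounds for a fully saturated 10 Gb/s link with no protocol overhead; the slightly larger figures quoted above are plausible once real-world overheads are included.

```python
# Idealized transfer times for bulk data over a 10 Gb/s link
# (no protocol overhead, link fully saturated).

LINK_GBPS = 10                 # 10 gigabits per second
BITS_PER_TB = 8 * 10**12       # 1 terabyte = 8 * 10^12 bits
BITS_PER_PB = 8 * 10**15       # 1 petabyte = 8 * 10^15 bits

def transfer_time_seconds(bits, link_gbps=LINK_GBPS):
    """Time to push `bits` through a link of `link_gbps` gigabits/second."""
    return bits / (link_gbps * 10**9)

tb_minutes = transfer_time_seconds(BITS_PER_TB) / 60
pb_days = transfer_time_seconds(BITS_PER_PB) / (60 * 60 * 24)

print(f"1 TB at 10 Gb/s: {tb_minutes:.1f} minutes")   # ~13.3 minutes
print(f"1 PB at 10 Gb/s: {pb_days:.1f} days")         # ~9.3 days
```

Even the best case, over nine days for a petabyte, makes shipping physical media competitive.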
These kinds of asynchronous (very high-latency) communications enable different kinds of interactions and call for different types of use cases. It is a challenge to figure out how best to use them, and how best to present them to the user. For example, client-side validation and preprocessing become very important, and it might make sense to include additional data that could be useful alongside the result. For instance, if you were searching Google this way (nobody would, given a choice), you might need the full results returned, not just the links (perhaps a small crawl of the first few results), etc.
This is a pretty controversial view of how a market/consumer/style-driven economy works.
In one or two places it does oversimplify (e.g., with computers, it is just one small added part that changes). However, there are truths in it. More or less, the loop “watch TV -> you suck -> go to work -> buy things -> watch TV” is part of our lives. It is, more or less, the effect imposed by criticizing eyes driven by styles and trends. I do not know how accurate the numbers are, but if even 50% (they say 99%) of the things we buy are thrown away within six months, we have a problem.
The original can be found in The Story of Stuff with Annie Leonard.
The article by Leavitt, in the January issue of Computer magazine, is a nice, and in my view impartial, discussion of what the cloud has brought, and can bring, to the table. It also points to a few problems that are still at large.
Also, the blog post by James Governor argues that Amazon's hardware-as-a-service model is the way to go. I agree that the nice, simple model of AWS is attractive because it enables users to port their systems with relatively few changes, and of course “simpler is better”. However, higher-level abstractions like App Engine could provide more features, such as failover or auto-scaling (I mean auto-scaling the application, not just adding machines). Among the open questions: how general will those features be, how many changes will they require to existing systems, and can they solve the associated hard problems?
Neal Leavitt, “Is Cloud Computing Really Ready for Prime Time?”, Computer, vol. 42, no. 1, pp. 15–20, January 2009.
James Governor, “Amazon Web Services: an instance of weakness as strength” (blog post).
I finished my thesis defense yesterday. But actually it seems to be old news now :); most people already knew through Dr. Sanjiva’s blog. I still have a few additions and updates to make to the thesis, and we are planning to return to Sri Lanka when I am done. I will post the thesis, abstract, and slides later.
My topic is “Enforcing User-Defined Management Logic in Large Scale Systems”. My adviser is Prof. Dennis Gannon, and the rest of the committee are Prof. Beth Plale, Prof. Geoffrey Fox, Prof. David B. Leake, and Dr. Sanjiva Weerawarana. I would like to thank Prof. Gannon and the rest of the committee, whose insights and encouragement made this thesis possible.
Furthermore, I would like to heartily thank Dr. Sanjiva, first of all for convincing me to read for a Ph.D. (as my friends know, by my third year at Moratuwa I had pretty much decided not to do one), and for his unceasing attention, help, and advice through the years. If not for him, none of this would have been possible. My heartfelt thanks also go to my wife, Miyuru, my parents, and my brother, for their help and support.
All six people from LSF who originally worked on Axis2 are in grad school (Jaliya, Ajith, Chathura, Eran, Deepal), and many more have followed them (more details here). We will see many Ph.D.s from these people soon. However, earning Ph.D.s is only a small step towards where we want to go; there is a lot of hard work ahead.
This is an introductory video on the LEAD project (Linked Environments for Atmospheric Discovery), in which we (the Extreme Lab) have been involved for the last five years. A very brief outline was given here. Details can be found at https://portal.leadproject.org/.
Colombo is now in Google Maps. Nice!