Google Chrome’s process separation may seem like a good thing, but in practice it hurts the user experience quite a bit.
For the uninitiated, here’s a comic book that explains what Chrome is doing under the hood (click for high res):
The problem with this (which is the default behavior, by the way, though you can choose others) is that any tab you click over to has to be swapped back in before it can be displayed. That is, the tab’s process has been paged out to virtual memory and needs to be pulled back into RAM. This is extraordinarily annoying in normal usage.
This raises the question of why break tabs into processes in the first place. Well, because web people like to think of websites as “applications”. My response is: if they’re applications, wouldn’t the user be better served if those were native applications, split into processes on the desktop anyway? I much prefer a real email reader like (gasp) Outlook to Gmail. We should be aiming to reduce the number of layers involved in providing people with novel networked services, not increase them.
I understand the ad-based revenue model doesn’t work so hot on the desktop. And I understand that the cost of entry is higher for desktop applications. But I think the user loses with this belief that the browser should become the platform upon which all future applications are written. A friend and I were discussing this yesterday. Someone he knows wants a custom application to be web-based. I mentioned that I could bang out that application as a .NET desktop client in an afternoon. Which would better serve the user? One where they need to set up a web server, with many layers of a web framework, or a simple .NET client/server model?
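To make the comparison concrete, here is a minimal sketch of the kind of two-piece client/server architecture I mean (in Python rather than .NET, purely for illustration; the server behavior and names here are my own invented example, not the friend’s actual application). One small server process, one small client, and no web server or framework in between:

```python
import socket
import threading

def run_server(host="127.0.0.1", port=0):
    """Start a toy server on an OS-chosen port; it answers one request and exits."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)

    def handle():
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(data.upper())  # trivial "business logic": uppercase the message
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return srv.getsockname()[1]  # the actual port the OS assigned

def client_request(port, message):
    """The entire client: connect, send the message, return the server's reply."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(message.encode())
        return c.recv(1024).decode()
```

Usage is equally small: `client_request(run_server(), "hello")` returns `"HELLO"`. The point isn’t the uppercasing; it’s that the whole stack fits on one screen, with no HTTP layer, templating layer, or browser in the path between the user and the service.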