But it is still most certainly a bottleneck, and too many websites have taken to gleefully dumping untenable processing loads on their visitors' computers. You've probably seen it: pages that load quickly, then lock up for a second or longer, preventing you from scrolling, as the browser churns through a backlog of social plugins, videos, and other intensive objects. In a modern browser like Chrome, it's merely annoying, but if you're using an old version of IE, you might as well work on your novel between page loads.
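The original demo code didn't survive here, but the kind of page-locking work described above can be reproduced with a sketch like this: one long synchronous loop doing all its work in a single pass. (The loop body is a stand-in for whatever the page is really doing, such as drawing thousands of elements.)

```javascript
// A deliberately pathological example: one tight synchronous loop.
// While it runs, the browser can't repaint, scroll, or respond to input.
var total = 0;
for (var i = 0; i < 5e6; i++) {
    total += Math.sqrt(i);  // stand-in for drawing a square, running a plugin, etc.
}
```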
View the result and you'll notice your browser will put its head down and work, not letting you do anything else until it's finished. You may even get a prompt asking if you want to stop the script. It's ugly stuff.
If there's a big chunk of processing you just have to drop on the client, like the above, or something more useful like rendering a detailed Raphael map, what can you do to preserve the user experience?
I've found that in almost every case, you can break the work up into manageable chunks, pack them into closures, and iterate through them with an eye on the clock, stepping away from the work with setTimeout at periodic intervals. The result is a simulation of multi-threaded process execution—pseudo-threading.
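The article's own worker code isn't shown here, so the following is a minimal sketch of the pattern it describes, borrowing the names it mentions later (`do_work`, `max_lock`, `timeout`): queue the work as an array of closures, run them in bursts while watching the clock, and hand control back to the browser with `setTimeout` whenever the time budget runs out.

```javascript
// Pseudo-threading: run queued jobs in timed bursts, yielding between them.
// `max_lock` and `timeout` values here are illustrative guesses, not the
// article's actual settings.
function do_work(jobs, on_done) {
    var max_lock = 50;  // ms we allow ourselves to block before yielding
    var timeout  = 20;  // ms handed back to the browser between bursts
    var i = 0;

    function run_burst() {
        var start = Date.now();
        // Work until we run out of jobs or out of time budget.
        while (i < jobs.length && Date.now() - start < max_lock) {
            jobs[i++]();
        }
        if (i < jobs.length) {
            setTimeout(run_burst, timeout);  // yield, then pick up where we left off
        } else if (on_done) {
            on_done();
        }
    }
    run_burst();
}
```

Because each burst ends well under human-perceptible duration, scrolling and clicks stay responsive while the queue drains in the background.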
Clearly in this version you don't get the squares appearing all at once, as in the first example. But, wonder of wonders, you can still scroll while it's drawing them.
You can also pass arguments and store current variable values in each job by nesting your closures.
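Since the original snippet is missing, here is one way that nesting might look (the factory name `make_job` is my own): a wrapper function freezes each loop variable's value at the moment the job is created, instead of every job seeing the loop's final value.

```javascript
// Without the wrapper, every job would see the final value of x;
// the factory function captures the value at creation time.
function make_job(x) {
    return function () {
        return x;  // in real use: draw a square at position x, etc.
    };
}

var jobs = [];
for (var x = 0; x < 5; x++) {
    jobs.push(make_job(x));  // each closure remembers its own x
}
// jobs[3]() returns 3, not 5
```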
It's a bit of extra code, but once you start using this pattern, you'll find more and more uses for it. If you know your script is going to go through some big loops, try passing the iterations to do_work, adjust your max_lock and timeout settings, and see how it performs. You may find that paced execution is just what your app needs.