URBAN Mainframe

User Comments

(for: Experimenting with Distributed Caching)
1 | Posted by: Gabriel Mihalache (Guest) | ~ 1 year, 4 months ago |

I wouldn’t worry about static content. You can round-robin a bunch of servers and you should be fine. You could also implement some kind of hardware-based mod_gz system to spare the CPU. BitTorrent might also help, if you have civilized users.

The main problems I can see apply to dynamic content, the kind of thing you pull out of a database based on request parameters and session data. There you run into a lot of issues: session affinity, distributed transactions, etc.

J2EE’s JNDI and EJB specs manage to reduce all this complexity into a workable API with a few simple rules, but they’re still very burdensome.

P.S. I was planning to ask you what MT plugin you were using to give comment forms a text-recognition image thingy, but then I saw “Powered By: Shapeshifter CMS” :-)

2 | Posted by: DarkBlue (Registered User) | ~ 1 year, 4 months ago |

Actually, Gabriel, I find exactly the opposite to be true. I guess it depends on the website, its content and its audience.

I have no problem dealing with requests to the application server, since this is a load-balanced cluster serving via mod_gzip. However, large-file downloads (video, software, etc.) and a growing concern over RSS scalability are what I’m trying to ease the burden of here. In these cases, the DCC systems are particularly attractive.

It’s important to stress that I’m not really looking at minimising bandwidth consumption here, nor am I trying to reduce processor load (minimal for file serving). I am more concerned with availability when “spikes” occur.

[what] MT plugin were you using so that comment forms have a text recognition image thingy

You already discovered that I am using a home-grown CMS for this blog. The “Captcha” (“text recognition thingy”) is discussed in my article: “Defending Against Comment Spam”.

3 | Posted by: DarkBlue (Registered User) | ~ 1 year, 4 months ago |

You can round-robin a bunch of servers and you should be fine.

Round-robin is not the best way to perform load-balancing. Granted, it’s the easiest to implement, but it is wasteful and inefficient.

In a high-load environment, load-balancing has to be algorithmic; that’s the only way resources can be optimally shared.
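To illustrate what “algorithmic” could mean in practice, here is a minimal sketch of one common alternative to round-robin, a least-connections policy. The class and backend names are purely illustrative, not anything from my actual setup:

```python
class LeastConnectionsBalancer:
    """Route each new request to the backend with the fewest
    active connections, instead of rotating blindly."""

    def __init__(self, backends):
        # Track the number of in-flight requests per backend.
        self.active = {b: 0 for b in backends}

    def acquire(self):
        # Choose the least-loaded backend and mark it busier.
        backend = min(self.active, key=self.active.get)
        self.active[backend] += 1
        return backend

    def release(self, backend):
        # Call when the request completes, freeing capacity.
        self.active[backend] -= 1


lb = LeastConnectionsBalancer(["app1", "app2", "app3"])
first = lb.acquire()   # all idle, so any backend qualifies
second = lb.acquire()  # skips the backend already serving a request
```

The point is that a slow backend accumulates connections and simply stops receiving new requests until it drains, something plain round-robin can never do.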

4 | Posted by: DarkBlue (Registered User) | ~ 1 year, 4 months ago |

EXPERIMENT SUSPENDED

I am suspending this experiment for a short time because I need to do a little refactoring.

My tests with FreeCache are worthless, because FreeCache file requests are logged to my access log whether they are served from my server or from FreeCache (can anyone tell me how FreeCache manages this little trick?). So I’m going to have to do a little more work to measure the effectiveness of FreeCache.

However, the Coral system really does “seem” to work. The number of downloads from my server of the file assigned to the Coral network has been far lower than the number of page-views of the host page, and far lower than the number of downloads of the file assigned to FreeCache.

Thus I deduce that the Coral network has served a large percentage of the requests for that file, since I would expect more users to download the smaller file than the larger one.

Of course, I could be wrong in any or all of my assumptions.

Therefore, I am going to perform the experiment again once I have written a click-through counter to run the links through. I will also investigate ways to measure the efficiency of the FreeCache system.

Watch this space!

5 | Posted by: Reader (Guest) | ~ 1 year, 4 months ago |

FreeCache sends you HEAD requests even when it is actually satisfying the GET requests itself, which is why you are seeing more requests than from Coral.
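If that is the mechanism, then one way to measure FreeCache’s real effectiveness from the access log would be to count only GET requests for the cached file and ignore the HEAD checks. A minimal Python sketch, where the sample log lines and file path are hypothetical:

```python
import re

# Hypothetical entries in Apache common log format; the request
# method is the first token inside the quoted request string.
LOG_LINES = [
    '1.2.3.4 - - [10/Oct/2005:13:55:36 +0000] "GET /files/video.mpg HTTP/1.1" 200 1048576',
    '5.6.7.8 - - [10/Oct/2005:13:56:01 +0000] "HEAD /files/video.mpg HTTP/1.1" 200 -',
]

METHOD_RE = re.compile(r'"(\w+) ')

def count_real_downloads(lines, path):
    """Count only GET requests for `path`; FreeCache's freshness
    checks arrive as HEAD and should not count as downloads."""
    count = 0
    for line in lines:
        m = METHOD_RE.search(line)
        if m and m.group(1) == "GET" and path in line:
            count += 1
    return count

print(count_real_downloads(LOG_LINES, "/files/video.mpg"))  # 1
```

The difference between total requests for the file and the GET-only count would then approximate how many hits FreeCache absorbed.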

6 | Posted by: DarkBlue (Registered User) | ~ 1 year, 4 months ago |

Ah, of course. Now I understand. Thanks Reader.
