Shared Cache (Daemon?)
Reported by: a.rieser@… | Owned by: moo
Description (last modified by moo)
I don't know how hard the feature I'm going to describe is to implement. Maybe I can help out with some of the work. I am basically expecting a discussion, as this seems to be a really innovative and good idea:
With forces joining everywhere around open source software projects, a lot of great frameworks are being created. Some of them offer the possibility of a kind of core that can be shared between different applications/installations. I'm a [http://typo3.org/ TYPO3] guy, so imagine something like one core and many dummy packages (= installations that are attached via symlinks). I think this or something similar can be found in several other projects. The point is: this is great with an opcode cacher if you use it with mod_php or in a one-process fastcgi environment, because thanks to the central cache even low-traffic installations can benefit from the cached core files.
When it comes to shared hosting, you probably want some more security and will consider methods like suexec or simply individual user permissions for your fastcgi environment. The core files are then read-only and shared, and every vhost owns an installation which it can't escape. But we have to build redundant caches for each vhost, which makes efficient caching impossible, as you may have to limit the cache sizes etc.
And then we have this great XCache feature called "Readonly Cacher Protection", so that the cache can't be touched by anyone but XCache.
So maybe it would make sense to have, in addition to those redundant per-vhost caches, a "shared cache". In this case, that cache would hold the cached core files.
I'm talking about a concept with two caches in XCache: a private one and a shared one. In a configuration directive we could define special directories whose files do not go to the private cache but are instead written to the shared one. When searching the cache, both caches have to be considered.
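To make the idea concrete, here is a minimal sketch of the two-tier lookup. The directive name and all identifiers are hypothetical, not part of any real XCache API; dicts stand in for the actual shared-memory caches:

```python
# Sketch of the proposed two-tier cache (all names hypothetical).
# A directive such as xcache.shared_dirs would list path prefixes whose
# compiled files go to the shared cache; everything else stays private.

SHARED_DIRS = ["/opt/typo3-core/"]  # hypothetical directive value

shared_cache = {}   # central, shared between vhosts
private_cache = {}  # per-vhost cache

def is_shared(path):
    """A file is shared if it lives under a configured shared directory."""
    return any(path.startswith(d) for d in SHARED_DIRS)

def store(path, compiled):
    """Route the compiled file to the shared or the private cache."""
    (shared_cache if is_shared(path) else private_cache)[path] = compiled

def lookup(path):
    """Both caches have to be considered when searching."""
    if path in shared_cache:
        return shared_cache[path]
    return private_cache.get(path)

store("/opt/typo3-core/index.php", "opcode-A")
store("/var/www/vhost1/local.php", "opcode-B")
```

The only new cost on the hot path is one extra prefix check per store and one extra hash lookup per miss in the private cache.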
Reading might not be the problem (though world-readable is a bad thing), but the different permissions will be tricky, at least when it comes to clearing the cache. So maybe, to solve this, there has to be a daemon around that
- owns the shared cache
- knows which directories should be cached there
- can be queried by the different fastcgi-driven XCache instances
- delivers shared cache content and, in case of a miss for a file that belongs in its hands:
  - tells its fastcgi-driven friends to store that miss at its place
  - accepts the new content and saves it
- will invalidate content when needed
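The daemon's responsibilities listed above can be modelled as a toy protocol. Everything here is an assumption for illustration (class names, methods, the query/store/invalidate flow); the real mechanism would live in shared memory or over a socket, not in Python:

```python
# Toy model of the proposed daemon protocol (all names hypothetical).

class SharedCacheDaemon:
    """Owns the shared cache and knows which directories belong to it."""

    def __init__(self, shared_dirs):
        self.shared_dirs = list(shared_dirs)
        self.cache = {}

    def owns(self, path):
        return any(path.startswith(d) for d in self.shared_dirs)

    def lookup(self, path):
        # Workers query the daemon for files in its directories.
        return self.cache.get(path)

    def store(self, path, compiled):
        # On a miss, a worker compiles the file and hands it back here.
        if self.owns(path):
            self.cache[path] = compiled

    def invalidate(self, path):
        # Called when a cached file becomes stale (e.g. mtime changed).
        self.cache.pop(path, None)


def worker_request(daemon, path, compile_fn):
    """One fastcgi worker serving a request for `path`."""
    hit = daemon.lookup(path) if daemon.owns(path) else None
    if hit is not None:
        return hit                  # served from the shared cache
    compiled = compile_fn(path)     # cache miss: compile locally...
    daemon.store(path, compiled)    # ...then store the result at the daemon
    return compiled


daemon = SharedCacheDaemon(["/opt/core/"])
first = worker_request(daemon, "/opt/core/a.php", lambda p: "op:" + p)
second = worker_request(daemon, "/opt/core/a.php", lambda p: "RECOMPILED")
```

Because only the daemon writes to its cache, the per-vhost permission problem disappears: workers never need write access to the shared segment, and clearing or invalidating is a single message to the daemon.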
So what do you think about that concept?