• Caching

2018-12-20 14:40:32
Caching
A certain woman had a very sharp consciousness but almost no memory ... She remembered enough to work, and she worked hard. — Lydia Davis
Caching in REST Framework works well with the cache utilities provided in Django.

Using cache with apiview and viewsets
Django provides a method_decorator for using decorators with class-based views. This can be used with other cache decorators such as cache_page and vary_on_cookie.
from django.utils.decorators import method_decorator
from django.views.decorators.cache import cache_page

from rest_framework.response import Response
from rest_framework.views import APIView
from rest_framework import viewsets


class UserViewSet(viewsets.ViewSet):

    # Cache the requested url for each user for 2 hours
    @method_decorator(cache_page(60 * 60 * 2))
    def list(self, request, format=None):
        content = {
            'user_feed': request.user.get_user_feed()
        }
        return Response(content)


class PostView(APIView):

    # Cache the page for the requested url
    @method_decorator(cache_page(60 * 60 * 2))
    def get(self, request, format=None):
        content = {
            'title': 'Post title',
            'body': 'Post content'
        }
        return Response(content)

NOTE: The cache_page decorator only caches GET and HEAD responses with status 200.
• Intro to Caching, Caching algorithms and caching frameworks


Intro to Caching,Caching algorithms and caching frameworks

Source: http://www.jtraining.com/component/content/article/35-jtraining-blog/98.html

Introduction:
A lot of us have heard the word cache, and when asked about caching, many people can give a perfect definition, yet they don't know how a cache is built, or on which criteria they should favor one caching framework over another, and so on. In this article we are going to talk about caching, caching algorithms, and caching frameworks, and which is better than the other.

The Interview:
"A cache is a temporary location where I store data that I need frequently, as the original data is expensive to fetch, so I can retrieve it faster."
That is what Programmer 1 answered in the interview (a month earlier he had submitted his resume to a company that wanted a Java programmer with strong experience in caching and caching frameworks and extensive data manipulation).
Programmer 1 had made his own cache implementation using a hashtable, and that was all he knew about caching. His hashtable contained about 150 entries, which he considered extensive data (caching = hashtable: load the lookups into the hashtable and everything will be fine, nothing else). So let's see how the interview went.
Interviewer: Nice. And based on what criteria do you choose your caching solution?
Programmer 1: Huh (thinking for five minutes), mmm, based on, on, on the data (coughing...)
Interviewer: Excuse me! Could you repeat what you just said?
Programmer 1: Data?!
Interviewer: Oh, I see. OK, list some caching algorithms and tell me which is used for what.
Programmer 1: (staring at the interviewer and making strange expressions with his face, expressions that no one knew a human face could make :D)
Interviewer: OK, let me ask it another way: how will a cache behave once it reaches its capacity?
Programmer 1: Capacity? Mmm (thinking... a hashtable is not limited in capacity; I can add whatever I want and it will extend itself) (that was in Programmer 1's mind; he didn't say it out loud).
The interviewer thanked Programmer 1 (the interview lasted only ten minutes). Afterwards a woman came in and said: thanks for your time, we will call you back, have a nice day. This was the worst interview Programmer 1 had ever had (he hadn't read the part of the job description stating that the candidate should have a strong caching background; in fact, he had only seen the line about the excellent pay package ;) ).
Talk the talk and then walk the walk:
After Programmer 1 left, he wanted to know what the interviewer had been talking about and what the answers to his questions were, so he started to surf the net. Programmer 1 didn't know anything about caching except: when I need a cache, I will use a hashtable. With his favorite search engine he was able to find a nice caching article, and he started to read.
Why do we need caching?
Long ago, before the age of caching, users used to request an object, and this object was fetched from a storage place. As objects grew bigger and bigger, the user had to spend more time waiting for each request, and it really made the storage place suffer, because it had to work the whole time. This made both the user and the storage angry, and one of two things would happen:
1- The user would get upset, complain, and even stop using the application (that was usually the case).
2- The storage place would pack up its bags and leave the application, which caused big problems (no place to store data) (this happened in rare situations).
Caching is a godsend:
A few years later, researchers at IBM (in the 1960s) introduced a new concept and named it "cache".
What is a cache?
A cache is a temporary location where I store data that I need frequently, as the original data is expensive to fetch, so I can retrieve it faster.
A cache is made of a pool of entries; these entries are copies of real data that lives in storage (a database, for example), and each is tagged with a key identifier for retrieval. Great, so Programmer 1 already knew this; what he didn't know were the caching terminologies, which are as follows:

Cache Hit:
When the client invokes a request (let's say he wants to view product information) and our application gets the request, it will need to access the product data in our storage (database); it first checks the cache.
If an entry can be found with a tag matching that of the desired data (say, the product id), the entry is used instead. This is known as a cache hit (the cache hit is the primary measurement of caching effectiveness, which we will discuss later on). The percentage of accesses that result in cache hits is known as the hit rate or hit ratio of the cache.
Cache Miss:
On the contrary, when the tag isn't found in the cache (no match is found), this is known as a cache miss. A hit to the back storage is made and the data is fetched and placed in the cache, so future requests for it will result in a cache hit.
If we encounter a cache miss, there are two possible scenarios:
First scenario: there is free space in the cache (the cache hasn't reached its limit), so the object that caused the cache miss is retrieved from our storage and inserted into the cache.
Second scenario: there is no free space in the cache (the cache has reached its capacity), so the object that caused the cache miss is fetched from storage, and then we have to decide which object to evict from the cache in order to place the newly retrieved one. This is done by the replacement policy (caching algorithms), which decides which entry to remove to make more room, as discussed below; both cases are sketched in code right after this.
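To make the flow concrete, here is a minimal get-or-load sketch in Java; SimpleCache, loader, and evictOne are illustrative names, and the eviction is a deliberate placeholder for the replacement policies discussed below.

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Minimal get-or-load cache illustrating hits, misses, and a replacement hook.
class SimpleCache<K, V> {
    private final int capacity;
    private final Map<K, V> entries = new HashMap<>();

    SimpleCache(int capacity) { this.capacity = capacity; }

    V get(K key, Function<K, V> loader) {
        V value = entries.get(key);
        if (value != null) {
            return value;              // cache hit
        }
        value = loader.apply(key);     // cache miss: fetch from the back storage
        if (entries.size() >= capacity) {
            evictOne();                // no free space: the replacement policy decides
        }
        entries.put(key, value);       // place it so future lookups hit
        return value;
    }

    private void evictOne() {
        // Placeholder policy: remove an arbitrary entry. Real caches use
        // LFU, LRU, FIFO, and friends, which are described below.
        K victim = entries.keySet().iterator().next();
        entries.remove(victim);
    }
}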
Storage Cost:
When a cache miss occurs, data is fetched from the back storage, loaded, and placed in the cache. But how much space does the fetched data take in the cache memory? This is known as the storage cost.
Retrieval Cost:
When we need to load data, we need to know how much it costs to load it. This is known as the retrieval cost.
Invalidation:
When an object that resides in the cache is updated in the back storage, the cached copy needs to be updated too; keeping the cache up to date is known as invalidation. The entry is invalidated in the cache and fetched again from the back storage to get an updated version.
Replacement Policy:
When a cache miss happens and we don't have enough room, the cache ejects some other entry in order to make room for the new data. The heuristic used to select the entry to eject is known as the replacement policy.
Optimal Replacement Policy:
The theoretically optimal page replacement algorithm (also known as OPT or Belady's optimal page replacement policy) tries to achieve the following: when an object needs to be placed in the cache, the algorithm replaces the entry that will not be used for the longest period of time.
For example, a cache entry that is not going to be used for the next 10 seconds will be evicted in favor of an entry that is going to be used within the next 2 seconds.
Thinking about the optimal replacement policy, we can say it is impossible to achieve in practice (it requires knowing the future), but some algorithms come near to it using heuristics. So everything is based on heuristics; what, then, makes one algorithm better than another, and what do they use for their heuristics?
Nightmare on Java Street:
While reading the article, Programmer 1 fell asleep and had a nightmare (the scariest nightmare one can ever have).
Programmer 1: Nihahha, I will invalidate you! (talking in a mad way)
Cached Object: No, no, please let me live! They still need me. I have children.
Programmer 1: All cached entries say that before they are invalidated. And since when do you have children? Never mind, now vanish forever!
Buhaaahaha, laughed Programmer 1 in a scary way. Silence took over the place for a few minutes, and then a police siren broke it. The police caught Programmer 1; he was accused of invalidating an entry that was still needed by a cache client, and he was sent to jail.
Programmer 1 woke up really scared. He looked around, realized it was just a dream, and then continued reading about caching, trying to get rid of his fears.
Caching Algorithms:
No one can talk about caching algorithms better than the caching algorithms themselves.
Least Frequently Used (LFU):
I am Least Frequently Used; I count how often an entry is needed by incrementing a counter associated with each entry.
I remove the entry with the least-used counter first. I am not that fast, and I am not that good at adaptive actions (that is, at keeping the entries that are really needed and discarding the ones that aren't, based on the access pattern, or in other words the request pattern).
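As a rough illustration, here is a minimal LFU sketch in Java (LfuCache is an illustrative name); the linear scan for the smallest counter is exactly why a naive LFU is "not that fast".

import java.util.HashMap;
import java.util.Map;

// Minimal LFU sketch: every entry carries a use counter; eviction removes
// the entry with the smallest counter.
class LfuCache<K, V> {
    private final int capacity;
    private final Map<K, V> values = new HashMap<>();
    private final Map<K, Integer> counts = new HashMap<>();

    LfuCache(int capacity) { this.capacity = capacity; }

    V get(K key) {
        if (values.containsKey(key)) {
            counts.merge(key, 1, Integer::sum); // increment the counter on every hit
        }
        return values.get(key);
    }

    void put(K key, V value) {
        if (!values.containsKey(key) && values.size() >= capacity) {
            K victim = counts.entrySet().stream()    // least frequently used entry
                    .min(Map.Entry.comparingByValue())
                    .map(Map.Entry::getKey)
                    .orElseThrow();
            values.remove(victim);
            counts.remove(victim);
        }
        values.put(key, value);
        counts.merge(key, 1, Integer::sum);
    }
}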
Least Recently Used (LRU):
I am the Least Recently Used cache algorithm; I remove the least recently used items first: the ones that haven't been used for the longest time.
I require keeping track of what was used and when, which is expensive if one wants to make sure I always discard the least recently used item. Web browsers use me for caching. New items are placed at the top of the cache; when the cache exceeds its size limit, I discard items from the bottom. The trick is that whenever an item is accessed, I move it to the top.
So items which are frequently accessed tend to stay in the cache. There are two ways to implement me: an array, or a linked list (which keeps the least recently used entry at the back and the recently used ones at the front).
I am fast, and I am adaptive, in other words I can adapt to the data access pattern. I have a large family which completes me, and they are even better than me (I do feel jealous sometimes, but it is OK); some of my family members are LRU2 and 2Q (they were implemented to improve LRU caching).
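As a minimal sketch, Java's LinkedHashMap already provides the bookkeeping LRU needs when created in access-order mode (LruCache is an illustrative name):

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU sketch: get() moves an entry to the "front", and the eldest
// (least recently used) entry is discarded once capacity is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: reorder entries on access
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry
    }
}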
Least Recently Used 2 (LRU2):
I am Least Recently Used 2; some people call me Least Recently Used Twice, which I like more. I add entries to the cache the second time they are accessed (it takes two accesses to place an entry in the cache); when the cache is full, I remove the entry whose second-most-recent access is furthest in the past. Because of the need to track the two most recent accesses, my access overhead increases with cache size; if I am applied to a big cache, that can be a disadvantage. In addition, I have to keep track of some items that are not yet in the cache (they haven't been requested twice yet). I am better than LRU, and I am also adaptive to access patterns.
-Two Queues (2Q):
I am Two Queues; I add entries to an LRU cache as they are accessed. If an entry is accessed again, I move it to a second, larger LRU cache.
I remove entries so as to keep the first cache at about 1/3 the size of the second. I provide the advantages of LRU2 while keeping the cache access overhead constant, rather than having it increase with cache size, which makes me better than LRU2. Like the rest of my family, I am adaptive to access patterns.
Adaptive Replacement Cache (ARC):
I am Adaptive Replacement Cache; some people say I balance between LRU and LFU to improve the combined result. Well, that's not 100% true; actually I am made of two LRU lists. One list, say L1, contains entries that have been seen only once "recently", while the other, say L2, contains entries that have been seen at least twice "recently".
Items that have been seen twice within a short time have a low inter-arrival rate and are therefore thought of as "high-frequency". Hence we think of L1 as capturing "recency" and L2 as capturing "frequency"; so most people think I am a balance between LRU and LFU, but that is OK, I am not angry about it.
I am considered one of the best-performing replacement algorithms: self-tuning, with low replacement overhead. I also keep a history of entries equal to the size of the cache; this is to remember the entries that were removed, and it allows me to see whether a removed entry should have stayed and another should have been removed instead (I really have a bad memory). And yes, I am fast and adaptive.
Most Recently Used (MRU):
I am Most Recently Used; in contrast to LRU, I remove the most recently used items first. You will surely ask me why. Well, let me tell you something: when access is unpredictable, and determining the least recently used entry in the cache system is a high-time-complexity operation, I am the best choice.
I am common in database memory caches: whenever a cached record is used, I move it to the top of the stack. When there is no room, guess what? I replace the topmost entry with the new entry.
First In First Out (FIFO):
I am First In First Out; I am a low-overhead algorithm that requires little effort to manage the cache entries. The idea is that I keep track of all the cache entries in a queue, with the most recent entry at the back and the earliest at the front. When there is no space and an entry needs to be replaced, I remove the entry at the front of the queue (the oldest entry) and replace it with the currently fetched one. I am fast, but I am not adaptive.
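For comparison with the LRU sketch above, the same LinkedHashMap trick in its default insertion-order mode yields FIFO (FifoCache is an illustrative name): access no longer reorders anything, which is exactly why FIFO is cheap but not adaptive.

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal FIFO sketch: the eldest entry is simply the first one inserted.
class FifoCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    FifoCache(int capacity) { this.capacity = capacity; } // insertion order by default

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the oldest inserted entry
    }
}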
-Second Chance:
Hello, I am Second Chance, a modified form of the FIFO replacement algorithm, and I am better than FIFO at only a small extra cost. I work by looking at the front of the queue as FIFO does, but instead of immediately replacing the oldest cache entry, I check whether its referenced bit is set (a bit that tells me whether this entry has been used or requested before). If it is not set, I replace the entry. Otherwise, I clear the referenced bit and insert the entry at the back of the queue (as if it were a new entry), and I keep repeating this process. You can think of this as a circular queue. The second time I encounter an entry whose bit I cleared earlier, I replace it, since its referenced bit is now clear.
-Clock:
I am Clock, a more efficient version of FIFO than Second Chance, because I don't push cached entries to the back of the list the way Second Chance does, though I perform the same general function as Second Chance.
I keep a circular list of the cached entries in memory, with the "hand" (something like an iterator) pointing to the oldest entry in the list. When a cache miss occurs and no empty place exists, I consult the R (referenced) bit at the hand's location to decide what to do. If R is 0, I place the new entry at the hand's position; otherwise I clear the R bit, increment the hand (iterator), and repeat the process until an entry is replaced. I am even faster than Second Chance.
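A much-simplified Clock sketch (ClockCache is an illustrative name; a real cache would also keep a key-to-slot index so lookups can set the R bit):

// Minimal Clock sketch: a fixed circular array of slots plus a "hand".
class ClockCache<K> {
    private final K[] slots;
    private final boolean[] referenced; // the R bit for each slot
    private int hand = 0;

    @SuppressWarnings("unchecked")
    ClockCache(int capacity) {
        slots = (K[]) new Object[capacity];
        referenced = new boolean[capacity];
    }

    void touch(int slot) { referenced[slot] = true; } // set R on access

    int insert(K key) {
        while (true) {
            if (slots[hand] == null || !referenced[hand]) {
                slots[hand] = key;          // R == 0 (or empty slot): replace here
                referenced[hand] = true;
                int used = hand;
                hand = (hand + 1) % slots.length;
                return used;
            }
            referenced[hand] = false;       // give a second chance: clear R, move on
            hand = (hand + 1) % slots.length;
        }
    }
}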
Simple time-based expiration:
I am simple time-based caching; I invalidate entries in the cache based on absolute time periods. I add items to the cache, and they remain there for a specific amount of time. I am fast, but not adaptive to access patterns.
Extended time-based expiration:
I am extended time-based expiration caching; I invalidate items in the cache based on relative points in time. I add items to the cache, and they remain there until I invalidate them at certain points in time, such as every five minutes, or each day at 12:00.
Sliding time-based expiration:
I am sliding time-based expiration; I invalidate entries in the cache by specifying the amount of time an item is allowed to be idle in the cache after its last access time; after that time, I invalidate it. I am fast, but not adaptive to access patterns.
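A minimal sliding-expiration sketch (SlidingTtlCache is an illustrative name); the timestamp is reset on every access, which is exactly the idle-time rule described above:

import java.util.HashMap;
import java.util.Map;

// Minimal sliding-expiration sketch: an entry is invalidated once it has
// been idle longer than maxIdleMillis since its last access.
class SlidingTtlCache<K, V> {
    private final long maxIdleMillis;
    private final Map<K, V> values = new HashMap<>();
    private final Map<K, Long> lastAccess = new HashMap<>();

    SlidingTtlCache(long maxIdleMillis) { this.maxIdleMillis = maxIdleMillis; }

    V get(K key) {
        Long last = lastAccess.get(key);
        if (last == null) return null;
        long now = System.currentTimeMillis();
        if (now - last > maxIdleMillis) { // idle too long: invalidate
            values.remove(key);
            lastAccess.remove(key);
            return null;
        }
        lastAccess.put(key, now);         // sliding window: reset on access
        return values.get(key);
    }

    void put(K key, V value) {
        values.put(key, value);
        lastAccess.put(key, System.currentTimeMillis());
    }
}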
OK, now that we have listened to some of the famous replacement algorithms talking about themselves, note that other replacement algorithms take additional criteria into consideration:
Cost: if items have different costs, keep those items that are expensive to obtain, e.g. those that take a long time to fetch.
Size: if items have different sizes, the cache may want to discard a large item to store several smaller ones.
Time: some caches keep information that expires (e.g. a news cache, a DNS cache, or a web browser cache). The computer may discard items because they have expired. Depending on the size of the cache, no further caching algorithm to discard items may be necessary.
The E-mail!
After Programmer 1 read the article, he thought for a while and decided to send an e-mail to its author. He felt he had heard the author's name before, but he couldn't remember who the person was. Anyway, he sent a mail asking what would happen in a distributed environment: how would the cache behave?
The author of the caching article got his mail, and ironically it was the man who had interviewed Programmer 1 :D. The author replied:
Distributed caching:
* Cached data can be stored in a memory area separate from the caching directory itself (which handles the cache entries and so on), for example across the network or on disk.
* Distributing the cache allows the cache size to increase.
* In this case the retrieval cost will also increase, due to network request time.
* This will also lead to an increased hit ratio, due to the larger size of the cache.
But how does this work?
Let's assume we have three servers; two of them handle the distributed caching (they hold the cache entries), and the third handles all incoming requests (asking for cached entries):
Step 1: the application requests keys entry1, entry2, and entry3; after resolving the hash values for these entries, the request is forwarded to the proper server based on the hash value.
Step 2: the main node sends parallel requests to all relevant servers (those holding the cache entries we are looking for).
Step 3: the servers send their responses to the main node (which sent the request in the first place, asking for the cached entries).
Step 4: the main node sends the responses on to the application (the cache client).
* In case a cache entry is not found, the hash value for the entry is still computed and the request redirected to, say, server 1; the entry won't be found there, so it is fetched from the DB and added to server 1's cache list. (A small routing sketch follows.)
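Step 1's hash-based routing can be sketched in a few lines of Java (CacheRouter and the server addresses are hypothetical; production systems usually prefer consistent hashing so that adding a server does not remap most keys):

import java.util.List;

// Minimal sketch: pick a cache server for a key by hashing the key.
class CacheRouter {
    private final List<String> servers;

    CacheRouter(List<String> servers) { this.servers = servers; }

    String serverFor(String key) {
        int bucket = Math.floorMod(key.hashCode(), servers.size());
        return servers.get(bucket); // e.g. "entry1" always routes to the same server
    }
}

// Usage: new CacheRouter(List.of("cache1:11211", "cache2:11211")).serverFor("entry1")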

Measuring the cache:

Most caches can be evaluated by measuring the hit ratio, i.e. hits / (hits + misses), and comparing it to the theoretical optimum; this is usually done by generating a list of cache keys with no real data. Note, however, that the hit-ratio measurement assumes all entries have the same retrieval cost, which is not always true: in web caching, for example, the number of bytes the cache can serve matters more than the number of hits (one big entry could be replaced with ten small entries, which is more effective on the web).
Conclusion:
We have seen some of the popular algorithms used in caching; some of them are based on time, some on cached object size, and some on frequency of usage. In the next part we are going to talk about caching frameworks and how they make use of these caching algorithms, so stay tuned ;)

Reposted from: https://blog.51cto.com/williamx/1217019
• Spring Boot Caching


Spring Caching

Contents:
A simple demo
Constraining the caches available in the application
Disabling caching
Important annotations: @Cacheable, @CachePut, @CacheEvict, @Caching
CacheManager

A simple demo
Add the @Cacheable annotation to the method you want cached, and add the @EnableCaching annotation to the class (or directly to the application's startup class).
package com.jsong.wiki.blog;

import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.stereotype.Component;

@EnableCaching
@Component
public class CacheService {

    @Cacheable("cache")
    public String getName() {
        System.out.println("cache");
        return "cache";
    }
}


When getName is called, there is no cached value the first time, so the method body executes; on the second call the method is not executed, and the data is fetched directly from the cache.

Constraining the caches available in the application
In application.yml, only a cache named cache is allowed:

spring:
  cache:
    cache-names:
      - cache

When a cache that has not been declared is accessed, an error is raised at access time (startup does not fail):
java.lang.IllegalArgumentException: Cannot find cache named 'jsong' for Builder[public java.lang.String com.jsong.wiki.blog.CacheService.getName()] caches=[jsong] | key='' | keyGenerator='' | cacheManager='' | cacheResolver='' | condition='' | unless='' | sync='false'

Disabling caching
Disable the application cache with spring.cache.type=none:

spring:
  cache:
    cache-names:
      - jsong
    type: none


Several important annotations
@Cacheable
This annotation can both populate and read the cache.
public @interface Cacheable {

    @AliasFor("cacheNames")
    String[] value() default {};

    @AliasFor("value")
    String[] cacheNames() default {};

    String key() default "";

    String keyGenerator() default "";

    String cacheManager() default "";

    String cacheResolver() default "";

    String condition() default "";

    String unless() default "";

    boolean sync() default false;
}

Main parameters
cacheNames, value: these serve the same purpose, specifying the name of the cache; when both cacheNames and value are given, their values must match. Demo:
    @Cacheable(value = "cache1")
public String getName(){
return "cache1";
}

@Cacheable(value = "cache1")
public String getName2(){
return "cache2";
}

Unit test: when getName() is accessed, no cached value is detected, so the method executes, puts the value into cache cache1, and returns it. getName2() and getName() are annotated with the same cache value, cache1, so when getName2() executes, data is found in the cache; the method is not executed, and the data comes straight from the cache. Hence it returns cache1 rather than cache2.
@Test
public void testCache() {
    System.out.println(cacheService.getName());  // cache1
    System.out.println(cacheService.getName2()); // cache1
}


key: the key of the current cache entry, used together with value to identify the entry. Supports SpEL expressions (such as "#parameterName" or "#p" followed by the parameter index).
Available properties (the #root prefix may be omitted):
methodName - the name of the current method - #root.methodName
method - the current method - #root.method.name
target - the object being invoked - #root.target
targetClass - the class of the object being invoked - #root.targetClass
args - the array of arguments of the current method - #root.args[0]
caches - the caches used by the invoked method - #root.caches[0].name
Demo 1:
    @Cacheable(value = "cache1", key = "#root.method")
public String getName() {
return "cache1";
}

@Cacheable(value = "cache1", key = "#root.method")
public String getName2() {
return "cache2";
}

Unit test: the output is cache1, cache2. The two calls go through different methods, so their #root.method keys differ and each call gets its own cache entry.
@Test
public void testCache() {
    System.out.println(cacheService.getName());  // cache1
    System.out.println(cacheService.getName2()); // cache2
}


Demo 2:
    @Cacheable(value = "cache1", key = "#root.target")
public String getName() {
return "cache1";
}

@Cacheable(value = "cache1", key = "#root.target")
public String getName2() {
return "cache2";
}

Unit test: the output is cache1, cache1. Both methods are invoked on the same object, so the #root.target key is the same.
@Test
public void testCache() {
    System.out.println(cacheService.getName());  // cache1
    System.out.println(cacheService.getName2()); // cache1
}

condition: the cache is consulted only when the condition is satisfied; supports SpEL expressions.
Demo:
    @Cacheable(value = "cache1", key = "#root.target")
public String getName() {
return "cache1";
}

@Cacheable(value = "cache1", key = "#root.target", condition = "#p0>#p1")
public String getName2(int i, int j) {
return "cache2";
}

Unit test: the first time getName2 is called, the cache already holds cache1 (under the #root.target key) and the condition holds, so the cached cache1 is returned. The second time, the condition does not hold, so the method executes and returns cache2.
@Test
public void testCache() {
    System.out.println(cacheService.getName());      // cache1
    System.out.println(cacheService.getName2(2, 1)); // cache1
    System.out.println(cacheService.getName2(2, 3)); // cache2
}

@CachePut
This annotation only populates the cache; it never reads from it. Its attributes are much the same as @Cacheable's, so they are not repeated here.
Demo:
    @Cacheable(value = "cache1", key = "#root.target")
public String getName() {
return "cache1";
}

@Cacheable(value = "cache1", key = "#root.target", condition = "#p0>#p1")
public String getName2(int i, int j) {
return "cache2";
}

@CachePut(value = "cache1", key = "#root.target", condition = "#p0>#p1")
public String setCache1(int i, int j) {
return "put-cache1";
}

Unit test:
@Test
public void testCache() {
    // System.out.println(cacheService.getName2(2, 1));
    // System.out.println(cacheService.getName2(2, 3));
    cacheService.setCache1(2, 1);
    System.out.println(cacheService.getName()); // put-cache1
}

@CacheEvict
Clears the cache after the method completes; if the method throws, the cache is not cleared. The beforeInvocation attribute, when true, clears the cache before the method is invoked, so the cache is cleared even if the method throws. The allEntries attribute, when true, clears all entries and ignores key; when false (the default), only the entry with the given key is cleared.
Demo:
    @Cacheable(value = "cache1", key = "#root.target")
public String getName() {
return "cache1";
}

@Cacheable(value = "cache1", key = "#p0", condition = "#p0>#p1")
public String getName2(int i, int j) {
return "cache2";
}

// cache1 指定key赋值
@CachePut(value = "cache1", key = "#root.target", condition = "#p0>#p1")
public String setCache1(int i, int j) {
return "put-cache1";
}

// cache1 指定key赋值
@CachePut(value = "cache1", key = "#p0", condition = "#p0>#p1")
public String setCache2(int i, int j) {
return "put-cache2";
}

// 清除cache1 指定key的缓存
@CacheEvict(value = "cache1", key = "#root.target", condition = "#p0>#p1")
public void evictCache(int i, int j) {
}

//  清除cache1的所有缓存，忽略key
@CacheEvict(value = "cache1", allEntries = true, key = "#root.target", condition = "#p0>#p1")
public void evictAllCache(int i, int j) {
}

@Caching
Combines several cache annotations at once: cacheable, put, and evict.
@Caching(cacheable = @Cacheable(value = "cache1"),
         put = @CachePut(value = "cache2", key = "#root.target"),
         evict = @CacheEvict(value = "cache3", allEntries = true))
public String caching() {
    return "caching";
}

CacheManager
Besides Spring's built-in cache, the following providers can manage the cache:
Generic, JCache (JSR-107) (EhCache 3, Hazelcast, Infinispan, and others), EhCache 2.x, Hazelcast, Infinispan, Couchbase, Redis, Caffeine, Simple.
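As a sketch of the last option, a ConcurrentMapCacheManager bean (Spring's Simple provider) can be declared explicitly; this example is an illustration added here, not part of the original article:

import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.concurrent.ConcurrentMapCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        // only the caches named here will exist, mirroring spring.cache.cache-names
        return new ConcurrentMapCacheManager("cache");
    }
}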

• JPA Caching

2014-04-08 17:37:05
JPA Level 1 caching
JPA has two levels of caching. The first level of caching is the persistence context.

The JPA Entity Manager maintains a set of Managed Entities in the Persistence Context.

The Entity Manager guarantees that within a single Persistence Context, for any particular database row, there will be only one object instance. However, the same entity could be managed in another user's transaction, so you should use either optimistic or pessimistic locking, as explained in
JPA 2.0 Concurrency and locking.

The code below shows that a find on a managed entity with the same id and class as another in the same persistence context will return the same instance.

@Stateless
public class ShoppingCartBean implements ShoppingCart {

    @PersistenceContext
    EntityManager entityManager;

    public OrderLine createOrderLine(Product product, Order order) {
        OrderLine orderLine = new OrderLine(order, product);
        entityManager.persist(orderLine); // orderLine is now managed
        OrderLine orderLine2 = entityManager.find(OrderLine.class, orderLine.getId());
        // orderLine == orderLine2 is TRUE: one instance per row per persistence context
        return orderLine;
    }
}

The diagram below shows the life cycle of an entity in relation to the Persistence Context.

The code below illustrates the life cycle of an entity. A reference to a container-managed EntityManager is injected using the persistence context annotation. A new order entity is created, and the entity has the state New. Persist is called, making it a managed entity. Because the bean is a stateless session bean, it uses container-managed transactions by default; when the transaction commits, the order is made persistent in the database. When the order entity is returned at the end of the transaction, it is a detached entity.
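The original listing is not included here; the following is a hedged reconstruction of what it likely showed (OrderBean, Order, and Customer are assumed names):

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class OrderBean {

    @PersistenceContext
    EntityManager entityManager;

    public Order createOrder(Customer customer) {
        Order order = new Order(customer); // NEW: not yet in the persistence context
        entityManager.persist(order);      // MANAGED until the transaction ends
        return order;                      // DETACHED once the transaction commits
    }
}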

The Persistence Context can be either transaction-scoped (it "lives" for the length of the transaction) or extended (it spans multiple transactions). With a transaction-scoped persistence context, entities are detached at the end of a transaction.

As shown below, to persist the changes on a detached entity, you call the EntityManager's merge() operation, which returns an updated managed entity; the entity's updates will be persisted to the database at the end of the transaction.
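A minimal sketch of that merge() call, assumed to live in a session bean like the one above (setStatus is an assumed accessor):

public Order updateStatus(Order detached, String newStatus) {
    detached.setStatus(newStatus);        // modify the detached instance
    return entityManager.merge(detached); // returns the managed copy; its state
                                          // is written to the database at commit
}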

An extended persistence context spans multiple transactions, and the set of entities in the persistence context stays managed. This can be useful in a workflow scenario where a "conversation" with a user spans multiple requests.

The code below shows an example of a stateful session EJB with an extended persistence context, in a use-case scenario of adding line items to an order. After the order is persisted in the createOrder method, it remains managed until the EJB's remove method is called. In the addLineItem method, the order entity can be updated because it is managed, and the updates will be persisted at the end of the transaction.
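The original listing is missing here as well; a hedged reconstruction under the same assumptions (Order, Product, and OrderLine as in the earlier example; Customer and addLineItem are assumed):

import javax.ejb.Remove;
import javax.ejb.Stateful;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.PersistenceContextType;

@Stateful
public class OrderManagerBean {

    @PersistenceContext(type = PersistenceContextType.EXTENDED)
    EntityManager entityManager;

    private Order order;

    public void createOrder(Customer customer) {
        order = new Order(customer);
        entityManager.persist(order); // stays MANAGED across transactions
    }

    public void addLineItem(Product product) {
        // no find() needed: order is still managed, so this update
        // is persisted at the end of the current transaction
        order.addLineItem(new OrderLine(order, product));
    }

    @Remove
    public void checkout() { } // the extended persistence context closes here
}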

The example below contrasts updating the order using a transaction-scoped persistence context versus an extended persistence context. With the transaction-scoped persistence context, an EntityManager find must be done to look up the order; this returns a managed entity which can be updated. With the extended persistence context, the find is not necessary. The performance advantage of not doing a database read to look up the entity must be weighed against the disadvantages of memory consumption for caching and the risk of cached entities being updated by another transaction. Depending on the application and the risk of contention among concurrent transactions, this may or may not give better performance / scalability.

JPA second level (L2) caching
JPA second level (L2) caching shares entity state across various persistence contexts.

JPA 1.0 did not specify support for a second-level cache; however, most persistence providers provided support for second-level caches. JPA 2.0 specifies support for basic cache operations with the new Cache API, which is accessible from the EntityManagerFactory, as shown below:
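The original figure is not reproduced here; the snippet below sketches the JPA 2.0 Cache API it described (the persistence-unit name "orderPU" and the Order entity are assumptions):

import javax.persistence.Cache;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class CacheApiDemo {

    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("orderPU");
        Cache cache = emf.getCache();

        Long orderId = 1L;                                   // assumed identifier
        boolean inL2 = cache.contains(Order.class, orderId); // is this entity L2-cached?
        cache.evict(Order.class, orderId);                   // evict one entity
        cache.evict(Order.class);                            // evict all Orders
        cache.evictAll();                                    // clear the whole L2 cache
        System.out.println("Order was cached: " + inL2);
    }
}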

If L2 caching is enabled, entities not found in the persistence context will be loaded from the L2 cache, if present there.

The advantages of L2 caching are:

- it avoids database access for already loaded entities
- it is faster for reading frequently accessed, unmodified entities

The disadvantages of L2 caching are:

- memory consumption for a large number of objects
- stale data for updated objects
- concurrency for writes (optimistic lock exceptions, or pessimistic locking)
- bad scalability for frequently or concurrently updated entities

You should configure L2 caching for entities that are:

- read often
- modified infrequently
- not critical if stale

You should protect any data that can be concurrently modified with a locking strategy:

- handle optimistic lock failures on flush/commit
- configure expiration and refresh policies to minimize lock failures
The query cache is useful for queries that are run frequently with the same parameters, on tables that are not modified.

The EclipseLink JPA persistence provider caching architecture
The EclipseLink caching architecture is shown below.

Support for the second-level cache in EclipseLink is turned on by default: entities that are read are L2-cached. You can disable the L2 cache. EclipseLink caches entities themselves in L2, holding the entity id and state. You can configure caching by entity type or persistence unit with the following configuration parameters:
- cache isolation, type, size, expiration, coordination, invalidation, refreshing
- coordination (cluster messaging): JMS, RMI, RMI-IIOP, ...
- coordination mode: SYNC, SYNC+NEW, INVALIDATE, NONE
The example below shows configuring the L2 cache for an entity using the @Cache annotation.
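The original example is not included; a hedged reconstruction using EclipseLink's @Cache annotation might look like this (the values are illustrative):

import javax.persistence.Entity;
import org.eclipse.persistence.annotations.Cache;
import org.eclipse.persistence.annotations.CacheType;

@Entity
@Cache(type = CacheType.SOFT, // soft references: entries may be reclaimed under memory pressure
       size = 10000,          // maximum number of cached entries
       expiry = 600000)       // entries expire after 10 minutes (milliseconds)
public class Order {
    // fields and accessors omitted
}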

The Hibernate JPA persistence provider caching architecture
The Hibernate JPA persistence provider's caching architecture is different from EclipseLink's: it is not configured by default, it does not cache entities, just ids and state, and you can plug in different L2 caches. The diagram below shows the different L2 cache types that you can plug into Hibernate.

The configuration of the cache depends on the type of caching plugged in. The example below shows configuring the Hibernate L2 cache for an entity using the @Cache annotation.
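The original example is not included; a hedged reconstruction using Hibernate's @Cache annotation might look like this (the concurrency strategy shown is illustrative and should match the entity's read/write pattern):

import javax.persistence.Entity;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Order {
    // fields and accessors omitted
}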

