
Cache (computing)

From Wikipedia, the free encyclopedia

Diagram of a CPU memory cache operation

In computing, a cache (/kæʃ/ KASH)[1] is a hardware or software component that stores data so that future requests for that data can be served faster; the data stored in a cache might be the result of an earlier computation or a copy of data stored elsewhere. A cache hit occurs when the requested data can be found in a cache, while a cache miss occurs when it cannot. Cache hits are served by reading data from the cache, which is faster than recomputing a result or reading from a slower data store; thus, the more requests that can be served from the cache, the faster the system performs.[2]

To be cost-effective, caches must be relatively small. Nevertheless, caches are effective in many areas of computing because typical computer applications access data with a high degree of locality of reference. Such access patterns exhibit temporal locality, where data is requested that has been recently requested, and spatial locality, where data is requested that is stored near data that has already been requested.
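The effect of locality can be sketched with a toy model. Assuming an illustrative 64-byte cache line, a sequential scan touches each line many times (spatial locality), while a large-stride access pattern touches a new line on every access; the line size and access patterns here are assumptions for demonstration, not measurements of any particular machine.

```python
LINE_SIZE = 64  # bytes per cache line (a common, but not universal, size)

def lines_touched(addresses):
    """Count likely hits (re-touched lines) and distinct lines touched."""
    seen = set()
    hits = 0
    for addr in addresses:
        line = addr // LINE_SIZE
        if line in seen:
            hits += 1        # same line as an earlier access: likely a cache hit
        else:
            seen.add(line)   # first touch of this line: a compulsory miss
    return hits, len(seen)

n = 1024
sequential = [i * 8 for i in range(n)]      # 8-byte elements, back to back
strided    = [i * 4096 for i in range(n)]   # one element per 4 KB, no reuse

print(lines_touched(sequential))  # (896, 128): most accesses reuse a line
print(lines_touched(strided))     # (0, 1024): every access misses
```

The sequential pattern reuses each line eight times, which is why real caches read whole lines rather than single words.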

Motivation

In memory design, there is an inherent trade-off between capacity and speed, because larger capacity implies larger size and thus greater physical distances for signals to travel, causing propagation delays. There is also a trade-off between high-performance technologies such as SRAM and cheaper, easily mass-produced commodities such as DRAM, flash, or hard disks.

The buffering provided by a cache benefits one or both of latency and throughput (bandwidth).

A larger resource incurs a significant latency for access – e.g. it can take hundreds of clock cycles for a modern 4 GHz processor to reach DRAM. This is mitigated by reading large chunks into the cache, in the hope that subsequent reads will be from nearby locations and can be read from the cache. Prediction or explicit prefetching can be used to guess where future reads will come from and make requests ahead of time; if done optimally, the latency is bypassed altogether.

The use of a cache also allows for higher throughput from the underlying resource, by assembling multiple fine-grain transfers into larger, more efficient requests. In the case of DRAM circuits, the additional throughput may be gained by using a wider data bus.

Operation

Hardware implements cache as a block of memory for temporary storage of data likely to be used again. Central processing units (CPUs), solid-state drives (SSDs) and hard disk drives (HDDs) frequently include hardware-based cache, while web browsers and web servers commonly rely on software caching.

A cache is made up of a pool of entries. Each entry has associated data, which is a copy of the same data in some backing store. Each entry also has a tag, which specifies the identity of the data in the backing store of which the entry is a copy.

When the cache client (a CPU, web browser, operating system) needs to access data presumed to exist in the backing store, it first checks the cache. If an entry can be found with a tag matching that of the desired data, the data in the entry is used instead. This situation is known as a cache hit. For example, a web browser program might check its local cache on disk to see if it has a local copy of the contents of a web page at a particular URL. In this example, the URL is the tag, and the content of the web page is the data. The percentage of accesses that result in cache hits is known as the hit rate or hit ratio of the cache.

The alternative situation, when the cache is checked and found not to contain any entry with the desired tag, is known as a cache miss. This requires a more expensive access of data from the backing store. Once the requested data is retrieved, it is typically copied into the cache, ready for the next access.

During a cache miss, some other previously existing cache entry is typically removed in order to make room for the newly retrieved data. The heuristic used to select the entry to replace is known as the replacement policy. One popular replacement policy, least recently used (LRU), replaces the entry that was accessed least recently. More sophisticated caching algorithms also take into account the frequency of use of entries.
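The lookup, hit/miss, and LRU-replacement cycle described above can be sketched in a few lines. The backing store here is a plain dict standing in for slower storage, and the class and method names are illustrative, not a real API.

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, backing_store, capacity):
        self.backing = backing_store
        self.capacity = capacity
        self.entries = OrderedDict()   # tag -> data, least recently used first
        self.hits = self.misses = 0

    def read(self, tag):
        if tag in self.entries:                # cache hit
            self.hits += 1
            self.entries.move_to_end(tag)      # mark as most recently used
            return self.entries[tag]
        self.misses += 1                       # cache miss
        data = self.backing[tag]               # expensive backing-store access
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict least recently used entry
        self.entries[tag] = data               # copy into the cache
        return data

store = {url: f"page {url}" for url in ("a", "b", "c")}
cache = LRUCache(store, capacity=2)
cache.read("a"); cache.read("b"); cache.read("a")   # miss, miss, hit
cache.read("c")                                     # miss; evicts "b"
print(cache.hits, cache.misses)   # 1 3, i.e. a hit ratio of 25%
```

The hit and miss counters correspond directly to the hit ratio defined earlier.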

Write policies

A write-through cache without write allocation
A write-back cache with write allocation

Cache writes must eventually be propagated to the backing store. The timing for this is governed by the write policy. The two primary write policies are:[3]

  • Write-through: Writes are performed synchronously to both the cache and the backing store.
  • Write-back: Initially, writing is done only to the cache. The write to the backing store is postponed until the modified content is about to be replaced by another cache block.

A write-back cache is more complex to implement since it needs to track which of its locations have been written over and mark them as dirty for later writing to the backing store. The data in these locations are written back to the backing store only when they are evicted from the cache, a process referred to as a lazy write. For this reason, a read miss in a write-back cache may require two memory accesses to the backing store: one to write back the dirty data, and one to retrieve the requested data. Other policies may also trigger data write-back. The client may make many changes to data in the cache, and then explicitly notify the cache to write back the data.

Write operations do not return data. Consequently, a decision needs to be made for write misses: whether or not to load the data into the cache. This is determined by these write-miss policies:

  • Write allocate (also called fetch on write): Data at the missed-write location is loaded to cache, followed by a write-hit operation. In this approach, write misses are similar to read misses.
  • No-write allocate (also called write-no-allocate or write around): Data at the missed-write location is not loaded to cache, and is written directly to the backing store. In this approach, data is loaded into the cache on read misses only.

While both write policies can implement either write-miss policy, they are typically paired as follows:[4][5]

  • A write-back cache typically employs write allocate, anticipating that subsequent writes or reads to the same location will benefit from having the data already in the cache.
  • A write-through cache uses no-write allocate. Here, subsequent writes have no advantage, since they still need to be written directly to the backing store.
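The two pairings above can be sketched as follows, assuming a dict-like backing store. The write-back cache tracks dirty entries and flushes them only on eviction (the lazy write); the write-through cache updates the backing store on every write and does not allocate on a write miss. Class and method names are illustrative.

```python
class WriteThroughCache:
    """Write-through with no-write allocate."""
    def __init__(self, backing):
        self.backing = backing
        self.entries = {}

    def write(self, tag, data):
        self.backing[tag] = data          # synchronous write to backing store
        if tag in self.entries:           # update the cached copy if present,
            self.entries[tag] = data      # but do not allocate on a write miss

class WriteBackCache:
    """Write-back with write allocate."""
    def __init__(self, backing, capacity):
        self.backing = backing
        self.capacity = capacity
        self.entries = {}                 # tag -> data
        self.dirty = set()                # tags modified since the last flush

    def write(self, tag, data):
        if tag not in self.entries and len(self.entries) >= self.capacity:
            self._evict()
        self.entries[tag] = data          # write only to the cache...
        self.dirty.add(tag)               # ...and remember that it is dirty

    def _evict(self):
        victim = next(iter(self.entries))
        if victim in self.dirty:          # lazy write: flush only dirty data
            self.backing[victim] = self.entries[victim]
            self.dirty.discard(victim)
        del self.entries[victim]

backing = {}
wb = WriteBackCache(backing, capacity=1)
wb.write("x", 1)
print(backing)        # {} - the write has not reached the backing store yet
wb.write("y", 2)      # evicting "x" triggers the deferred write
print(backing)        # {'x': 1}
```

The deferred flush is what makes a read miss in a write-back cache potentially cost two backing-store accesses, as noted above.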

Entities other than the cache may change the data in the backing store, in which case the copy in the cache may become out-of-date or stale. Alternatively, when the client updates the data in the cache, copies of that data in other caches will become stale. Communication protocols between the cache managers that keep the data consistent are associated with cache coherence.

Prefetch

On a cache read miss, caches with a demand paging policy read the minimum amount from the backing store. A typical demand-paging virtual memory implementation reads one page of virtual memory (often 4 KB) from disk into the disk cache in RAM. A typical CPU reads a single L2 cache line of 128 bytes from DRAM into the L2 cache, and a single L1 cache line of 64 bytes from the L2 cache into the L1 cache.

Caches with a prefetch input queue or more general anticipatory paging policy go further—they not only read the data requested, but guess that the next chunk or two of data will soon be required, and so prefetch that data into the cache ahead of time. Anticipatory paging is especially helpful when the backing store has a long latency to read the first chunk and much shorter times to sequentially read the next few chunks, such as disk storage and DRAM.

A few operating systems go further with a loader that always pre-loads the entire executable into RAM. A few caches go even further, not only pre-loading an entire file, but also starting to load other related files that may soon be requested, such as the page cache associated with a prefetcher or the web cache associated with link prefetching.

Examples of hardware caches

CPU cache

Small memories on or close to the CPU can operate faster than the much larger main memory.[6] Most CPUs since the 1980s have used one or more caches, sometimes in cascaded levels; modern high-end embedded, desktop and server microprocessors may have as many as six types of cache (between levels and functions).[7] Some examples of caches with a specific function are the D-cache, I-cache and the translation lookaside buffer for the memory management unit (MMU).

GPU cache

Earlier graphics processing units (GPUs) often had limited read-only texture caches and used swizzling to improve 2D locality of reference. Cache misses would drastically affect performance, e.g. if mipmapping was not used. Caching was important to leverage 32-bit (and wider) transfers for texture data that was often as little as 4 bits per pixel.

As GPUs advanced, supporting general-purpose computing on graphics processing units and compute kernels, they have developed progressively larger and increasingly general caches, including instruction caches for shaders, exhibiting functionality commonly found in CPU caches. These caches have grown to handle synchronization primitives between threads and atomic operations, and interface with a CPU-style MMU.

DSPs

Digital signal processors have similarly generalized over the years. Earlier designs used scratchpad memory fed by direct memory access, but modern DSPs such as Qualcomm Hexagon often include a very similar set of caches to a CPU (e.g. Modified Harvard architecture with shared L2, split L1 I-cache and D-cache).[8]

Translation lookaside buffer

A memory management unit (MMU) that fetches page table entries from main memory has a specialized cache, used for recording the results of virtual address to physical address translations. This specialized cache is called a translation lookaside buffer (TLB).[9]

In-network cache

Information-centric networking

Information-centric networking (ICN) is an approach to evolve the Internet infrastructure away from a host-centric paradigm, based on perpetual connectivity and the end-to-end principle, to a network architecture in which the focal point is identified information. Due to the inherent caching capability of the nodes in an ICN, it can be viewed as a loosely connected network of caches, which has unique requirements for caching policies. However, ubiquitous content caching introduces the challenge to content protection against unauthorized access, which requires extra care and solutions.[10]

Unlike proxy servers, in ICN the cache is a network-level solution. Therefore, it has rapidly changing cache states and higher request arrival rates; moreover, smaller cache sizes impose different requirements on the content eviction policies. In particular, eviction policies for ICN should be fast and lightweight. Various cache replication and eviction schemes for different ICN architectures and applications have been proposed.[citation needed]

Policies

Time aware least recently used

The time aware least recently used (TLRU) is a variant of LRU designed for the situation where the stored contents in cache have a valid lifetime. The algorithm is suitable in network cache applications, such as ICN, content delivery networks (CDNs) and distributed networks in general. TLRU introduces a new term: time to use (TTU). TTU is a time stamp on content which stipulates the usability time for the content based on the locality of the content and information from the content publisher. Owing to this locality-based time stamp, TTU provides more control to the local administrator to regulate in-network storage.

In the TLRU algorithm, when a piece of content arrives, a cache node calculates the local TTU value based on the TTU value assigned by the content publisher. The local TTU value is calculated by using a locally defined function. Once the local TTU value is calculated, the replacement of content is performed on a subset of the total content stored in the cache node. TLRU ensures that less popular and short-lived content is replaced with incoming content.[11]
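The admission and expiry mechanics can be sketched as follows. The simplifying assumptions are that the locally defined function just clamps the publisher's TTU to an administrator-set maximum, and that eviction prefers an expired entry before falling back to plain LRU; names and the clamping rule are illustrative, not the published algorithm in full.

```python
import time
from collections import OrderedDict

class TLRUCache:
    def __init__(self, capacity, max_ttu, clock=time.monotonic):
        self.capacity = capacity
        self.max_ttu = max_ttu            # local administrator's cap on TTU
        self.clock = clock
        self.entries = OrderedDict()      # tag -> (data, expiry), LRU order

    def local_ttu(self, publisher_ttu):
        # Locally defined function: here, simply clamp the publisher's value.
        return min(publisher_ttu, self.max_ttu)

    def insert(self, tag, data, publisher_ttu):
        now = self.clock()
        if tag not in self.entries and len(self.entries) >= self.capacity:
            expired = [t for t, (_, exp) in self.entries.items() if exp <= now]
            victim = expired[0] if expired else next(iter(self.entries))
            del self.entries[victim]      # prefer expired content, else LRU
        self.entries[tag] = (data, now + self.local_ttu(publisher_ttu))

    def get(self, tag):
        item = self.entries.get(tag)
        if item is None or item[1] <= self.clock():
            return None                   # miss, or content past its TTU
        self.entries.move_to_end(tag)
        return item[0]
```

Injecting a fake clock makes the expiry behaviour easy to exercise deterministically.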

Least frequent recently used

The least frequent recently used (LFRU) cache replacement scheme combines the benefits of LFU and LRU schemes. LFRU is suitable for network cache applications, such as ICN, CDNs and distributed networks in general. In LFRU, the cache is divided into two partitions called privileged and unprivileged partitions. The privileged partition can be seen as a protected partition. If content is highly popular, it is pushed into the privileged partition. Replacement of the privileged partition is done by first evicting content from the unprivileged partition, then pushing content from the privileged partition to the unprivileged partition, and finally inserting new content into the privileged partition. In this procedure, LRU is used for the privileged partition and an approximated LFU (ALFU) scheme is used for the unprivileged partition. The basic idea is to cache the locally popular content with the ALFU scheme and push the popular content to the privileged partition.[12]
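A simplified sketch of the two-partition layout follows. The promotion threshold, the partition sizes, and the use of a plain per-entry counter as the approximated LFU are illustrative assumptions, not the published scheme in full.

```python
from collections import OrderedDict

class LFRUCache:
    def __init__(self, priv_size, unpriv_size, promote_after=2):
        self.priv = OrderedDict()          # privileged: tag -> data, LRU order
        self.unpriv = {}                   # unprivileged: tag -> (data, uses)
        self.priv_size = priv_size
        self.unpriv_size = unpriv_size
        self.promote_after = promote_after

    def get(self, tag):
        if tag in self.priv:
            self.priv.move_to_end(tag)     # LRU bookkeeping for privileged
            return self.priv[tag]
        if tag in self.unpriv:
            data, count = self.unpriv[tag]
            count += 1
            if count >= self.promote_after:    # popular: promote
                del self.unpriv[tag]
                self._insert_privileged(tag, data)
            else:
                self.unpriv[tag] = (data, count)
            return data
        return None

    def insert(self, tag, data):
        if len(self.unpriv) >= self.unpriv_size:
            # Evict the approximately least frequently used entry (ALFU).
            victim = min(self.unpriv, key=lambda t: self.unpriv[t][1])
            del self.unpriv[victim]
        self.unpriv[tag] = (data, 1)

    def _insert_privileged(self, tag, data):
        if len(self.priv) >= self.priv_size:
            demoted, demoted_data = self.priv.popitem(last=False)
            self.insert(demoted, demoted_data)  # demote LRU privileged entry
        self.priv[tag] = data
```

New content always enters the unprivileged partition; only repeated use pushes it into the protected, LRU-managed one.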

Weather forecast

In 2011, the use of smartphones with weather forecasting options was overly taxing AccuWeather servers; two requests from the same area would each generate a separate query to the servers. An optimization by edge servers to truncate the GPS coordinates to fewer decimal places meant that the cached result of an earlier, nearby query would be used. The number of to-the-server lookups per day dropped by half.[13]
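The optimization amounts to coarsening the cache key. A precision of two decimal places (roughly a kilometre of latitude) is an illustrative assumption here, not AccuWeather's actual value.

```python
def cache_key(lat, lon, places=2):
    """Truncate coordinates so nearby requests share one cache entry."""
    return (round(lat, places), round(lon, places))

# Two users a few hundred metres apart map to the same key,
# so the second request can be served from the cache.
assert cache_key(40.71284, -74.00602) == cache_key(40.71391, -74.00555)
```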

Software caches

Disk cache

While CPU caches are generally managed entirely by hardware, a variety of software manages other caches. The page cache in main memory is managed by the operating system kernel.

While the disk buffer, which is an integrated part of the hard disk drive or solid state drive, is sometimes misleadingly referred to as disk cache, its main functions are write sequencing and read prefetching. High-end disk controllers often have their own on-board cache for the hard disk drive's data blocks.

Finally, a fast local hard disk drive can also cache information held on even slower data storage devices, such as remote servers (web cache) or local tape drives or optical jukeboxes; such a scheme is the main concept of hierarchical storage management. Also, fast flash-based solid-state drives (SSDs) can be used as caches for slower rotational-media hard disk drives, working together as hybrid drives.

Web cache

Web browsers and web proxy servers, either locally or at the Internet service provider (ISP), employ web caches to store previous responses from web servers, such as web pages and images. Web caches reduce the amount of information that needs to be transmitted across the network, as information previously stored in the cache can often be re-used. This reduces bandwidth and processing requirements of the web server, and helps to improve responsiveness for users of the web.[14]

Another form of cache is P2P caching, where the files most sought for by peer-to-peer applications are stored in an ISP cache to accelerate P2P transfers. Similarly, decentralised equivalents exist, which allow communities to perform the same task for P2P traffic, for example, Corelli.[15]

Memoization

A cache can store data that is computed on demand rather than retrieved from a backing store. Memoization is an optimization technique that stores the results of resource-consuming function calls within a lookup table, allowing subsequent calls to reuse the stored results and avoid repeated computation. It is related to the dynamic programming algorithm design methodology, which can also be thought of as a means of caching.
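A minimal memoization example: the results of an expensive recursive computation are stored in a lookup table keyed by the arguments, so each distinct input is computed only once. Python's `functools.lru_cache` is the standard-library form of this idea.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Naively exponential without the cache; linear with it."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(90))            # 2880067194370816120, computed near-instantly
print(fib.cache_info())   # hits and misses of the underlying lookup table
```

Without the decorator, `fib(90)` would repeat the same subproblems an astronomical number of times; with it, each `n` is computed once, which is exactly the relationship to dynamic programming noted above.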

Content delivery network

A content delivery network (CDN) is a network of distributed servers that deliver pages and other web content to a user, based on the geographic locations of the user, the origin of the web page and the content delivery server.

CDNs were introduced in the late 1990s as a way to speed up the delivery of static content, such as HTML pages, images and videos. By replicating content on multiple servers around the world and delivering it to users based on their location, CDNs can significantly improve the speed and availability of a website or application. When a user requests a piece of content, the CDN will check to see if it has a copy of the content in its cache. If it does, the CDN will deliver the content to the user from the cache.[16]

Cloud storage gateway

A cloud storage gateway is a hybrid cloud storage device that connects a local network to one or more cloud storage services, typically object storage services such as Amazon S3. It provides a cache for frequently accessed data, offering high-speed local access to data held in the cloud storage service. Cloud storage gateways also provide additional benefits, such as access to cloud object storage through traditional file-serving protocols and continued access to cached data during connectivity outages.[17]

Other caches

The BIND DNS daemon caches a mapping of domain names to IP addresses, as does a DNS resolver library.

Write-through operation is common when operating over unreliable networks, because of the enormous complexity of the coherency protocol required between multiple write-back caches when communication is unreliable. For instance, web page caches and client-side caches for distributed file systems (like those in NFS or SMB) are typically read-only or write-through specifically to keep the network protocol simple and reliable.

Web search engines also frequently make web pages they have indexed available from their cache. This can prove useful when web pages from a web server are temporarily or permanently inaccessible.

Database caching can substantially improve the throughput of database applications, for example in the processing of indexes, data dictionaries, and frequently used subsets of data.

A distributed cache[18] uses networked hosts to provide scalability, reliability and performance to the application.[19] The hosts can be co-located or spread over different geographical regions.

Buffer vs. cache

The semantics of a "buffer" and a "cache" are not totally different; even so, there are fundamental differences in intent between the process of caching and the process of buffering.

Fundamentally, caching realizes a performance increase for transfers of data that is being repeatedly transferred. While a caching system may realize a performance increase upon the initial (typically write) transfer of a data item, this performance increase is due to buffering occurring within the caching system.

With read caches, a data item must have been fetched from its residing location at least once in order for subsequent reads of the data item to realize a performance increase by virtue of being able to be fetched from the cache's (faster) intermediate storage rather than the data's residing location. With write caches, a performance increase of writing a data item may be realized upon the first write of the data item by virtue of the data item immediately being stored in the cache's intermediate storage, deferring the transfer of the data item to its residing storage to a later stage, or performing it as a background process. Contrary to strict buffering, a caching process must adhere to a (potentially distributed) cache coherency protocol in order to maintain consistency between the cache's intermediate storage and the location where the data resides. Buffering, on the other hand,

  • reduces the number of transfers for otherwise novel data amongst communicating processes, which amortizes overhead involved for several small transfers over fewer, larger transfers,
  • provides an intermediary for communicating processes which are incapable of direct transfers amongst each other, or
  • ensures a minimum data size or representation required by at least one of the communicating processes involved in a transfer.

With typical caching implementations, a data item that is read or written for the first time is effectively being buffered; and in the case of a write, mostly realizing a performance increase for the application from where the write originated. Additionally, the portion of a caching protocol where individual writes are deferred to a batch of writes is a form of buffering. The portion of a caching protocol where individual reads are deferred to a batch of reads is also a form of buffering, although this form may negatively impact the performance of at least the initial reads (even though it may positively impact the performance of the sum of the individual reads). In practice, caching almost always involves some form of buffering, while strict buffering does not involve caching.

A buffer is a temporary memory location that is traditionally used because CPU instructions cannot directly address data stored in peripheral devices. Thus, addressable memory is used as an intermediate stage. Additionally, such a buffer may be feasible when a large block of data is assembled or disassembled (as required by a storage device), or when data may be delivered in a different order than that in which it is produced. Also, a whole buffer of data is usually transferred sequentially (for example to hard disk), so buffering itself sometimes increases transfer performance or reduces the variation or jitter of the transfer's latency as opposed to caching where the intent is to reduce the latency. These benefits are present even if the buffered data are written to the buffer once and read from the buffer once.

A cache also increases transfer performance. A part of the increase similarly comes from the possibility that multiple small transfers will combine into one large block. But the main performance gain occurs because there is a good chance that the same data will be read from cache multiple times, or that written data will soon be read. A cache's sole purpose is to reduce accesses to the underlying slower storage. Cache is also usually an abstraction layer that is designed to be invisible from the perspective of neighboring layers.

References

  1. ^ "Cache". Oxford Dictionaries. Archived from the original on 18 August 2012. Retrieved 2 August 2016.
  2. ^ Zhong, Liang; Zheng, Xueqian; Liu, Yong; Wang, Mengting; Cao, Yang (February 2020). "Cache hit ratio maximization in device-to-device communications overlaying cellular networks". China Communications. 17 (2): 232–238. doi:10.23919/jcc.2020.02.018. ISSN 1673-5447. S2CID 212649328.
  3. ^ Bottomley, James (1 January 2004). "Understanding Caching". Linux Journal. Retrieved 1 October 2019.
  4. ^ Hennessy, John L.; Patterson, David A. (2011). Computer Architecture: A Quantitative Approach. Elsevier. p. B–12. ISBN 978-0-12-383872-8.
  5. ^ Patterson, David A.; Hennessy, John L. (1990). Computer Architecture A Quantitative Approach. Morgan Kaufmann Publishers. p. 413. ISBN 1-55860-069-8.
  6. ^ Su, Chao; Zeng, Qingkai (10 June 2021). Nicopolitidis, Petros (ed.). "Survey of CPU Cache-Based Side-Channel Attacks: Systematic Analysis, Security Models, and Countermeasures". Security and Communication Networks. 2021: 1–15. doi:10.1155/2021/5559552. ISSN 1939-0122.
  7. ^ "Intel Broadwell Core i7 5775C '128MB L4 Cache' Gaming Behemoth and Skylake Core i7 6700K Flagship Processors Finally Available In Retail". 25 September 2015. Mentions L4 cache. Combined with separate I-cache and TLB, this brings the total number of caches (levels + functions) to 6.
  8. ^ "qualcom Hexagon DSP SDK overview".
  9. ^ Frank Uyeda (2009). "Lecture 7: Memory Management" (PDF). CSE 120: Principles of Operating Systems. UC San Diego. Retrieved 4 December 2013.
  10. ^ Bilal, Muhammad; et al. (2019). "Secure Distribution of Protected Content in Information-Centric Networking". IEEE Systems Journal. 14 (2): 1–12. arXiv:1907.11717. Bibcode:2020ISysJ..14.1921B. doi:10.1109/JSYST.2019.2931813. S2CID 198967720.
  11. ^ Bilal, Muhammad; Kang, Shin-Gak (2014). Time Aware Least Recent Used (TLRU) cache management policy in ICN. 16th International Conference on Advanced Communication Technology. pp. 528–532. arXiv:1801.00390. Bibcode:2018arXiv180100390B. doi:10.1109/ICACT.2014.6779016. ISBN 978-89-968650-3-2. S2CID 830503.
  12. ^ Bilal, Muhammad; et al. (2017). "A Cache Management Scheme for Efficient Content Eviction and Replication in Cache Networks". IEEE Access. 5: 1692–1701. arXiv:1702.04078. Bibcode:2017arXiv170204078B. doi:10.1109/ACCESS.2017.2669344. S2CID 14517299.
  13. ^ Murphy, Chris (30 May 2011). "5 Lines Of Code In The Cloud". InformationWeek. p. 28. 300 million to 500 million fewer requests a day handled by AccuWeather servers
  14. ^ Multiple (wiki). "Web application caching". Docforge. Archived from the original on 12 December 2019. Retrieved 24 July 2013.
  15. ^ Tyson, Gareth; Mauthe, Andreas; Kaune, Sebastian; Mu, Mu; Plagemann, Thomas. Corelli: A Dynamic Replication Service for Supporting Latency-Dependent Content in Community Networks (PDF). MMCN'09. Archived from the original (PDF) on 18 June 2015.
  16. ^ "Globally Distributed Content Delivery, by J. Dilley, B. Maggs, J. Parikh, H. Prokop, R. Sitaraman and B. Weihl, IEEE Internet Computing, Volume 6, Issue 5, November 2002" (PDF). Archived (PDF) from the original on 9 August 2017. Retrieved 25 October 2019.
  17. ^ "Definition: cloud storage gateway". SearchStorage. July 2014.
  18. ^ Paul, S.; Fei, Z. (1 February 2001). "Distributed caching with centralized control". Computer Communications. 24 (2): 256–268. CiteSeerX 10.1.1.38.1094. doi:10.1016/S0140-3664(00)00322-4.
  19. ^ Khan, Iqbal (July 2009). "Distributed Caching on the Path To Scalability". MSDN. 24 (7).
