From Wikipedia, the free encyclopedia

In computer science, an algorithm is called non-blocking if failure or suspension of any thread cannot cause failure or suspension of another thread;[1] for some operations, these algorithms provide a useful alternative to traditional blocking implementations. A non-blocking algorithm is lock-free if there is guaranteed system-wide progress, and wait-free if there is also guaranteed per-thread progress. "Non-blocking" was used as a synonym for "lock-free" in the literature until the introduction of obstruction-freedom in 2003.[2]

The word "non-blocking" was traditionally used to describe telecommunications networks that could route a connection through a set of relays "without having to re-arrange existing calls"[This quote needs a citation] (see Clos network). Also, if the telephone exchange "is not defective, it can always make the connection"[This quote needs a citation] (see nonblocking minimal spanning switch).

Motivation

The traditional approach to multi-threaded programming is to use locks to synchronize access to shared resources. Synchronization primitives such as mutexes, semaphores, and critical sections are all mechanisms by which a programmer can ensure that certain sections of code do not execute concurrently, if doing so would corrupt shared memory structures. If one thread attempts to acquire a lock that is already held by another thread, the thread will block until the lock is free.

Blocking a thread can be undesirable for many reasons. An obvious reason is that while the thread is blocked, it cannot accomplish anything: if the blocked thread had been performing a high-priority or real-time task, it would be highly undesirable to halt its progress.

Other problems are less obvious. For example, certain interactions between locks can lead to error conditions such as deadlock, livelock, and priority inversion. Using locks also involves a trade-off between coarse-grained locking, which can significantly reduce opportunities for parallelism, and fine-grained locking, which requires more careful design, increases locking overhead and is more prone to bugs.

Unlike blocking algorithms, non-blocking algorithms do not suffer from these downsides, and in addition are safe for use in interrupt handlers: even though the preempted thread cannot be resumed, progress is still possible without it. In contrast, global data structures protected by mutual exclusion cannot safely be accessed in an interrupt handler, as the preempted thread may be the one holding the lock. While this can be rectified by masking interrupt requests during the critical section, this requires the code in the critical section to have bounded (and preferably short) running time, or excessive interrupt latency may be observed.[3]

A lock-free data structure can be used to improve performance. A lock-free data structure increases the amount of time spent in parallel execution rather than serial execution, improving performance on a multi-core processor, because access to the shared data structure does not need to be serialized to stay coherent.[4]

Implementation

With few exceptions, non-blocking algorithms use atomic read-modify-write primitives that the hardware must provide, the most notable of which is compare and swap (CAS). Critical sections are almost always implemented using standard interfaces over these primitives (in the general case, critical sections will be blocking, even when implemented with these primitives). In the 1990s all non-blocking algorithms had to be written "natively" with the underlying primitives to achieve acceptable performance. However, the emerging field of software transactional memory promises standard abstractions for writing efficient non-blocking code.[5][6]
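
For illustration, here is a minimal sketch of a CAS retry loop in C++11. The shared counter and the capped-doubling update are hypothetical, not taken from the sources cited above, but the load/compute/compare-and-swap structure is the general idiom these algorithms build on.

    #include <atomic>

    // Hypothetical shared value updated by many threads without a lock.
    std::atomic<int> shared_value{1};

    // Atomically apply "double, but cap at 'cap'" to shared_value via CAS.
    // compare_exchange_weak stores 'desired' only if shared_value still equals
    // 'expected'; on failure it reloads 'expected' and the loop retries.
    void double_with_cap(int cap) {
        int expected = shared_value.load(std::memory_order_relaxed);
        int desired;
        do {
            desired = expected * 2;
            if (desired > cap) desired = cap;
        } while (!shared_value.compare_exchange_weak(expected, desired,
                                                     std::memory_order_acq_rel,
                                                     std::memory_order_relaxed));
    }

Note that this loop is lock-free but not wait-free: a thread can in principle retry forever if other threads keep winning the CAS.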

Much research has also been done in providing basic data structures such as stacks, queues, sets, and hash tables. These allow programs to easily exchange data between threads asynchronously.

Additionally, some non-blocking data structures are weak enough to be implemented without special atomic primitives. These exceptions include:

  • a single-reader, single-writer ring buffer FIFO, with a capacity that evenly divides the wrap-around range of one of the available unsigned integer types, can unconditionally be implemented safely using only a memory barrier (see the sketch after this list)
  • Read-copy-update with a single writer and any number of readers. (The readers are wait-free; the writer is usually lock-free, until it needs to reclaim memory).
  • Read-copy-update with multiple writers and any number of readers. (The readers are wait-free; multiple writers generally serialize with a lock and are not obstruction-free).
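
As an illustration of the first bullet, here is a minimal single-producer/single-consumer ring buffer sketch in C++11. It assumes exactly one pushing thread and one popping thread, and uses a power-of-two capacity so that the free-running 32-bit indices wrap harmlessly; acquire/release atomics stand in for the explicit memory barrier. The class and member names are illustrative.

    #include <atomic>
    #include <cstdint>

    // Single-producer/single-consumer FIFO. The indices grow without bound and
    // wrap modulo 2^32; because Capacity divides 2^32, "tail - head" is always
    // the number of occupied slots, even across wrap-around.
    template <typename T, uint32_t Capacity>      // Capacity: a power of two
    class SpscRing {
        static_assert((Capacity & (Capacity - 1)) == 0, "power of two required");
        T buffer_[Capacity];
        std::atomic<uint32_t> head_{0};           // advanced only by the consumer
        std::atomic<uint32_t> tail_{0};           // advanced only by the producer

    public:
        bool push(const T& item) {                // called by the single producer
            uint32_t tail = tail_.load(std::memory_order_relaxed);
            uint32_t head = head_.load(std::memory_order_acquire);
            if (tail - head == Capacity) return false;            // full
            buffer_[tail % Capacity] = item;
            tail_.store(tail + 1, std::memory_order_release);     // publish the slot
            return true;
        }

        bool pop(T& out) {                        // called by the single consumer
            uint32_t head = head_.load(std::memory_order_relaxed);
            uint32_t tail = tail_.load(std::memory_order_acquire);
            if (tail == head) return false;                       // empty
            out = buffer_[head % Capacity];
            head_.store(head + 1, std::memory_order_release);     // release the slot
            return true;
        }
    };

Both operations finish in a constant number of steps regardless of what the other thread is doing, so under these assumptions the structure is wait-free, not merely lock-free.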

Several libraries internally use lock-free techniques,[7][8][9] but it is difficult to write lock-free code that is correct.[10][11][12][13]

Non-blocking algorithms generally involve a series of read, read-modify-write, and write instructions in a carefully designed order. Optimizing compilers can aggressively re-arrange operations. Even when they don't, many modern CPUs often re-arrange such operations (they have a "weak consistency model"), unless a memory barrier is used to tell the CPU not to reorder. C++11 programmers can use std::atomic in <atomic>, and C11 programmers can use <stdatomic.h>, both of which supply types and functions that tell the compiler not to re-arrange such instructions, and to insert the appropriate memory barriers.[14]
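
For example, the common publish-and-consume pattern relies on release/acquire ordering so that neither the compiler nor the CPU can move the payload writes past the flag update. A minimal sketch follows; the payload struct and the busy-wait loop are purely illustrative.

    #include <atomic>

    struct Payload { int a; int b; };    // plain, non-atomic data

    Payload data;
    std::atomic<bool> ready{false};

    void producer() {
        data = {1, 2};                                   // 1: fill in the payload
        ready.store(true, std::memory_order_release);    // 2: publish; the payload
                                                         //    writes cannot be
                                                         //    reordered after this
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire))   // loads after this cannot
            ;                                            // be hoisted before it
        int sum = data.a + data.b;                       // guaranteed to see {1, 2}
        (void)sum;
    }

With relaxed ordering, or with plain non-atomic variables, the consumer could observe ready == true while still reading stale payload values on a weakly ordered CPU.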

Wait-freedom

Wait-freedom is the strongest non-blocking guarantee of progress, combining guaranteed system-wide throughput with starvation-freedom. An algorithm is wait-free if every operation has a bound on the number of steps the algorithm will take before the operation completes.[15] This property is critical for real-time systems and is desirable in general, as long as the performance cost is not too high.
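
For instance, a ticket counter based on an atomic fetch-and-add is wait-free on hardware where the operation maps to a single atomic instruction (such as x86's LOCK XADD) rather than being emulated with a CAS loop: each call finishes in a bounded number of steps no matter how many threads are incrementing concurrently. A sketch, with hypothetical names:

    #include <atomic>

    std::atomic<long> next_ticket{0};

    // Every caller obtains a distinct ticket in a bounded number of steps: there
    // is no retry loop, so no thread can be starved by the others (assuming the
    // hardware provides fetch-and-add natively rather than via LL/SC or CAS).
    long take_ticket() {
        return next_ticket.fetch_add(1, std::memory_order_relaxed);
    }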

It was shown in the 1980s[16] that all algorithms can be implemented wait-free, and many transformations from serial code, called universal constructions, have been demonstrated. However, the resulting performance does not in general match even naïve blocking designs. Several papers have since improved the performance of universal constructions, but their performance still falls far below that of blocking designs.

Several papers have investigated the difficulty of creating wait-free algorithms. For example, it has been shown[17] that the widely available atomic conditional primitives, CAS and LL/SC, cannot provide starvation-free implementations of many common data structures without memory costs growing linearly in the number of threads.

However, these lower bounds do not present a real barrier in practice, as spending a cache line or exclusive reservation granule (up to 2 KB on ARM) of store per thread in the shared memory is not considered too costly for practical systems. Typically, the amount of store logically required is a word, but physically CAS operations on the same cache line will collide, and LL/SC operations in the same exclusive reservation granule will collide, so the amount of store physically required[citation needed] is greater.[clarification needed]

Wait-free algorithms were rare until 2011, both in research and in practice. However, in 2011 Kogan and Petrank[18] presented a wait-free queue building on the CAS primitive, generally available on common hardware. Their construction expanded the lock-free queue of Michael and Scott,[19] which is an efficient queue often used in practice. A follow-up paper by Kogan and Petrank[20] provided a method for making wait-free algorithms fast and used this method to make the wait-free queue practically as fast as its lock-free counterpart. A subsequent paper by Timnat and Petrank[21] provided an automatic mechanism for generating wait-free data structures from lock-free ones. Thus, wait-free implementations are now available for many data-structures.

Under reasonable assumptions, Alistarh, Censor-Hillel, and Shavit showed that lock-free algorithms are practically wait-free.[22] Thus, in the absence of hard deadlines, wait-free algorithms may not be worth the additional complexity that they introduce.

Lock-freedom

Lock-freedom allows individual threads to starve but guarantees system-wide throughput. An algorithm is lock-free if, when the program threads are run for a sufficiently long time, at least one of the threads makes progress (for some sensible definition of progress). All wait-free algorithms are lock-free.

In particular, if one thread is suspended, then a lock-free algorithm guarantees that the remaining threads can still make progress. Hence, if two threads can contend for the same mutex lock or spinlock, then the algorithm is not lock-free. (If we suspend one thread that holds the lock, then the second thread will block.)

An algorithm is lock-free if, infinitely often, operations by some processors will succeed in a finite number of steps. For instance, if N processors are trying to execute an operation, some of the N processors will succeed in finishing the operation in a finite number of steps, while others might fail and retry on failure. The difference between wait-free and lock-free is that a wait-free operation by each processor is guaranteed to succeed in a finite number of steps, regardless of the other processors.
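
A standard textbook illustration is the push operation of a Treiber-style lock-free stack, sketched below in C++11: the CAS can fail only because another thread's operation succeeded in the meantime, which is precisely the system-wide progress guarantee. Pop is omitted here, since a correct lock-free pop must additionally handle safe memory reclamation and the ABA problem.

    #include <atomic>

    // Treiber-style lock-free stack, push only.
    template <typename T>
    class LockFreeStack {
        struct Node {
            T value;
            Node* next;
        };
        std::atomic<Node*> head_{nullptr};

    public:
        void push(const T& value) {
            Node* node = new Node{value, head_.load(std::memory_order_relaxed)};
            // If the CAS fails, another push or pop changed head_ first; the
            // failed call updates node->next to the new head and we retry.
            while (!head_.compare_exchange_weak(node->next, node,
                                                std::memory_order_release,
                                                std::memory_order_relaxed)) {
            }
        }
        // pop() deliberately omitted: freeing nodes safely requires hazard
        // pointers, epoch-based reclamation, or similar, which is where much of
        // the difficulty of lock-free code lives.
    };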

In general, a lock-free algorithm can run in four phases: completing one's own operation, assisting an obstructing operation, aborting an obstructing operation, and waiting. Completing one's own operation is complicated by the possibility of concurrent assistance and abortion, but is invariably the fastest path to completion.

The decision about when to assist, abort or wait when an obstruction is met is the responsibility of a contention manager. This may be very simple (assist higher priority operations, abort lower priority ones), or may be more optimized to achieve better throughput, or lower the latency of prioritized operations.

Correct concurrent assistance is typically the most complex part of a lock-free algorithm, and often very costly to execute: not only does the assisting thread slow down, but thanks to the mechanics of shared memory, the thread being assisted will be slowed, too, if it is still running.

Obstruction-freedom

Obstruction-freedom is the weakest natural non-blocking progress guarantee. An algorithm is obstruction-free if at any point, a single thread executed in isolation (i.e., with all obstructing threads suspended) for a bounded number of steps will complete its operation.[15] All lock-free algorithms are obstruction-free.

Obstruction-freedom demands only that any partially completed operation can be aborted and the changes made rolled back. Dropping concurrent assistance can often result in much simpler algorithms that are easier to validate. Preventing the system from continually live-locking is the task of a contention manager.

Some obstruction-free algorithms use a pair of "consistency markers" in the data structure. Processes reading the data structure first read one consistency marker, then read the relevant data into an internal buffer, then read the other marker, and then compare the markers. The data is consistent if the two markers are identical. Markers may be non-identical when the read is interrupted by another process updating the data structure. In such a case, the process discards the data in the internal buffer and tries again.
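
A seqlock-style version counter is one way to realize this marker pattern with a single writer: the writer makes the counter odd before touching the data and even again afterwards, and a reader accepts its copy only if it saw the same even value before and after. The sketch below is meant only to show the protocol; strictly speaking, the plain reads of the payload race with the writer under the C++11 memory model, and production implementations read the payload through atomics or rely on platform-specific guarantees.

    #include <atomic>

    struct Data { int x; int y; };       // illustrative payload

    std::atomic<unsigned> version{0};    // even: stable, odd: write in progress
    Data shared_data{0, 0};

    void write(const Data& d) {          // single writer
        unsigned v = version.load(std::memory_order_relaxed);
        version.store(v + 1, std::memory_order_relaxed);      // first marker: odd
        std::atomic_thread_fence(std::memory_order_release);  // payload writes stay below
        shared_data = d;
        version.store(v + 2, std::memory_order_release);      // second marker: even
    }

    Data read() {
        for (;;) {
            unsigned v1 = version.load(std::memory_order_acquire);  // first marker
            Data copy = shared_data;                                // copy into a local buffer
            std::atomic_thread_fence(std::memory_order_acquire);
            unsigned v2 = version.load(std::memory_order_relaxed);  // second marker
            if (v1 == v2 && (v1 & 1u) == 0)                         // markers match, no writer active
                return copy;
            // A writer interfered: discard the copy and try again.
        }
    }

The read path never blocks: a reader may retry indefinitely if a writer keeps interfering, but whenever it runs in isolation it completes within one pass, which is exactly the obstruction-freedom guarantee.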

References

  1. ^ Goetz, Brian; Peierls, Tim; Bloch, Joshua; Bowbeer, Joseph; Holmes, David; Lea, Doug (2006). Java Concurrency in Practice. Upper Saddle River, NJ: Addison-Wesley. p. 41. ISBN 9780321349606.
  2. ^ Herlihy, M.; Luchangco, V.; Moir, M. (2003). Obstruction-Free Synchronization: Double-Ended Queues as an Example (PDF). 23rd International Conference on Distributed Computing Systems. p. 522.
  3. ^ Butler W. Lampson; David D. Redell (February 1980). "Experience with Processes and Monitors in Mesa". Communications of the ACM. 23 (2): 105–117. CiteSeerX 10.1.1.142.5765. doi:10.1145/358818.358824. S2CID 1594544.
  4. ^ Marçais, Guillaume; Kingsford, Carl (2011). "A fast, lock-free approach for efficient parallel counting of occurrences of k-mers". Bioinformatics. 27 (6): 764–770. doi:10.1093/bioinformatics/btr011 ("Jellyfish mer counter").
  5. ^ Harris, Tim; Fraser, Keir (26 November 2003). "Language support for lightweight transactions" (PDF). ACM SIGPLAN Notices. 38 (11): 388. CiteSeerX 10.1.1.58.8466. doi:10.1145/949343.949340.
  6. ^ Harris, Tim; Marlow, S.; Peyton-Jones, S.; Herlihy, M. (June 15–17, 2005). "Composable memory transactions". Proceedings of the 2005 ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP '05 : Chicago, Illinois. New York, NY: ACM Press. pp. 48–60. doi:10.1145/1065944.1065952. ISBN 978-1-59593-080-4. S2CID 53245159.
  7. ^ libcds - C++ library of lock-free containers and safe memory reclamation schema
  8. ^ liblfds - A library of lock-free data structures, written in C
  9. ^ Concurrency Kit - A C library for non-blocking system design and implementation
  10. ^ Herb Sutter. "Lock-Free Code: A False Sense of Security". Archived from the original on 2025-08-05.
  11. ^ Herb Sutter. "Writing Lock-Free Code: A Corrected Queue". Archived from the original on 2025-08-05.
  12. ^ Herb Sutter. "Writing a Generalized Concurrent Queue".
  13. ^ Herb Sutter. "The Trouble With Locks".
  14. ^ Bruce Dawson. "ARM and Lock-Free Programming".
  15. ^ a b Anthony Williams. "Safety: off: How not to shoot yourself in the foot with C++ atomics". 2015. p. 20.
  16. ^ Herlihy, Maurice P. (1988). Impossibility and universality results for wait-free synchronization. Proc. 7th Annual ACM Symp. on Principles of Distributed Computing. pp. 276–290. doi:10.1145/62546.62593. ISBN 0-89791-277-2.
  17. ^ Fich, Faith; Hendler, Danny; Shavit, Nir (2004). On the inherent weakness of conditional synchronization primitives. Proc. 23rd Annual ACM Symp.on Principles of Distributed Computing (PODC). pp. 80–87. doi:10.1145/1011767.1011780. ISBN 1-58113-802-4.
  18. ^ Kogan, Alex; Petrank, Erez (2011). Wait-free queues with multiple enqueuers and dequeuers (PDF). Proc. 16th ACM SIGPLAN Symp. on Principles and Practice of Parallel Programming (PPOPP). pp. 223–234. doi:10.1145/1941553.1941585. ISBN 978-1-4503-0119-0.
  19. ^ Michael, Maged; Scott, Michael (1996). Simple, Fast, and Practical Non-Blocking and Blocking Concurrent Queue Algorithms. Proc. 15th Annual ACM Symp. on Principles of Distributed Computing (PODC). pp. 267–275. doi:10.1145/248052.248106. ISBN 0-89791-800-2.
  20. ^ Kogan, Alex; Petrank, Erez (2012). A method for creating fast wait-free data structures. Proc. 17th ACM SIGPLAN Symp. on Principles and Practice of Parallel Programming (PPOPP). pp. 141–150. doi:10.1145/2145816.2145835. ISBN 978-1-4503-1160-1.
  21. ^ Timnat, Shahar; Petrank, Erez (2014). A Practical Wait-Free Simulation for Lock-Free Data Structures. Proc. 17th ACM SIGPLAN Symp. on Principles and Practice of Parallel Programming (PPOPP). pp. 357–368. doi:10.1145/2692916.2555261. ISBN 978-1-4503-2656-8.
  22. ^ Alistarh, Dan; Censor-Hillel, Keren; Shavit, Nir (2014). Are Lock-Free Concurrent Algorithms Practically Wait-Free?. Proc. 46th Annual ACM Symposium on Theory of Computing (STOC’14). pp. 714–723. arXiv:1311.3200. doi:10.1145/2591796.2591836. ISBN 978-1-4503-2710-7.