

From Wikipedia, the free encyclopedia
Single instruction, multiple data

Single instruction, multiple data (SIMD) is a type of parallel computing (processing) in Flynn's taxonomy. SIMD describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously. SIMD can be internal (part of the hardware design) and can be directly accessible through an instruction set architecture (ISA), but it should not be confused with an ISA.

Such machines exploit data level parallelism, but not concurrency: there are simultaneous (parallel) computations, but each unit performs exactly the same instruction at any given moment (just with different data). A simple example is adding many pairs of numbers together: all of the SIMD units perform an addition, but each one operates on a different pair of values. SIMD is especially applicable to common tasks such as adjusting the contrast in a digital image or adjusting the volume of digital audio. Most modern central processing unit (CPU) designs include SIMD instructions to improve the performance of multimedia use. In recent CPUs, SIMD units are tightly coupled with cache hierarchies and prefetch mechanisms, which minimize latency during large block operations. For instance, AVX-512-enabled processors can prefetch entire cache lines and apply fused multiply-add (FMA) operations in a single SIMD instruction.
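The pairwise-addition example above can be sketched with x86 SSE2 intrinsics. This is a minimal illustration assuming an SSE2-capable processor; the function name `add4` is invented for the example:

```c
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdint.h>

/* Add four pairs of 32-bit integers with one SIMD instruction.
   Every lane performs the same addition, each on its own pair of values. */
void add4(const int32_t *a, const int32_t *b, int32_t *out) {
    __m128i va = _mm_loadu_si128((const __m128i *)a); /* load 4 ints at once */
    __m128i vb = _mm_loadu_si128((const __m128i *)b);
    __m128i vc = _mm_add_epi32(va, vb);               /* 4 additions in one op */
    _mm_storeu_si128((__m128i *)out, vc);             /* store 4 results */
}
```

A scalar loop would issue four separate add instructions; here one instruction performs all four lane-wise additions.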

Confusion between SIMT and SIMD

ILLIAC IV Array overview, from ARPA-funded Introductory description by Steward Denenberg, July 15, 1971.[2]

SIMD has three different subcategories in Flynn's 1972 Taxonomy, one of which is single instruction, multiple threads (SIMT). SIMT should not be confused with software threads or hardware threads, both of which are task time-sharing (time-slicing). SIMT is true simultaneous parallel hardware-level execution, such as in the ILLIAC IV.

SIMD should not be confused with Vector processing, characterized by the Cray-1 and clarified in Duncan's taxonomy. The difference between SIMD and vector processors is primarily the presence of a Cray-style SET VECTOR LENGTH instruction.

One key distinction between SIMT and SIMD is that the SIMD unit will not have its own memory. Another key distinction in SIMT is the presence of control flow mechanisms like warps (Nvidia terminology) or wavefronts (Advanced Micro Devices (AMD) terminology). ILLIAC IV simply called them "Control Signals". These signals ensure that each Processing Element in the entire parallel array is synchronized in its simultaneous execution of the (one, current) broadcast instruction.

Each hardware element (PU, or PE in ILLIAC IV terminology) working on an individual data item is sometimes also referred to as a SIMD lane or channel, although the ILLIAC IV PE was a scalar 64-bit unit. Modern graphics processing units (GPUs) are invariably wide SIMD within a register (SWAR) designs and typically have more than 16 such data lanes or channels of Processing Elements.[citation needed] Some newer GPUs integrate mixed-precision [citation needed] SWAR pipelines, which perform concurrent sub-word 8-bit, 16-bit, and 32-bit operations. This is critical for applications like AI inference, where mixed precision boosts throughput.

History


The first known operational use of SIMD within a register was in the TX-2, in 1958. It was capable of 36-bit operations and two 18-bit or four 9-bit sub-word operations.

The first commercial use of SIMD instructions was in the ILLIAC IV, which was completed in 1972. This included 64 (of an original design of 256) processors that had local memory to hold different values while performing the same instruction. Separate hardware quickly sent out the values to be processed and gathered up the results.

Vector supercomputers of the early 1970s such as the CDC Star-100 and the Texas Instruments ASC could operate on a "vector" of data with a single instruction. Vector processing was especially popularized by Cray in the 1970s and 1980s. Vector processing architectures are now considered separate from SIMD computers: Duncan's Taxonomy includes them whereas Flynn's Taxonomy does not, due to Flynn's work (1966, 1972) pre-dating the Cray-1 (1977). The complexity of Vector processors however inspired a simpler arrangement known as SIMD within a register.

The first era of modern SIMD computers was characterized by massively parallel processing-style supercomputers such as the Thinking Machines Connection Machine CM-1 and CM-2. These computers had many limited-functionality processors that would work in parallel. For example, each of 65,536 single-bit processors in a Thinking Machines CM-2 would execute the same instruction at the same time, allowing it, for instance, to logically combine 65,536 pairs of bits at a time, using a hypercube-connected network or processor-dedicated RAM to find its operands. Supercomputing moved away from the SIMD approach when inexpensive scalar multiple instruction, multiple data (MIMD) approaches based on commodity processors such as the Intel i860 XP became more powerful, and interest in SIMD waned.[3]

The current era of SIMD processors grew out of the desktop-computer market rather than the supercomputer market. As desktop processors became powerful enough to support real-time gaming and audio/video processing during the 1990s, demand grew for this type of computing power, and microprocessor vendors turned to SIMD to meet the demand.[4] This resurgence also coincided with the rise of DirectX and OpenGL shader models, which heavily leveraged SIMD under the hood. The graphics APIs encouraged programmers to adopt data-parallel programming styles, indirectly accelerating SIMD adoption in desktop software. Hewlett-Packard introduced Multimedia Acceleration eXtensions (MAX) instructions into PA-RISC 1.1 desktops in 1994 to accelerate MPEG decoding.[5] Sun Microsystems introduced SIMD integer instructions in its "VIS" instruction set extensions in 1995, in its UltraSPARC I microprocessor. MIPS followed suit with their similar MDMX system.

The first widely deployed desktop SIMD was with Intel's MMX extensions to the x86 architecture in 1996. This sparked the introduction of the much more powerful AltiVec system in the Motorola PowerPC and IBM's POWER systems. Intel responded in 1999 by introducing the all-new SSE system. Since then, there have been several extensions to the SIMD instruction sets for both architectures. Advanced vector extensions AVX, AVX2 and AVX-512 are developed by Intel. AMD supports AVX, AVX2, and AVX-512 in their current products.[6]

All of these developments have been oriented toward support for real-time graphics, and are therefore oriented toward processing in two, three, or four dimensions, usually with vector lengths of between two and sixteen words, depending on data type and architecture. When new SIMD architectures need to be distinguished from older ones, the newer architectures are then considered "short-vector" architectures, as earlier SIMD and vector supercomputers had vector lengths from 64 to 64,000. A modern supercomputer is almost always a cluster of MIMD computers, each of which implements (short-vector) SIMD instructions.

Advantages


An application that may take advantage of SIMD is one where the same value is being added to (or subtracted from) a large number of data points, a common operation in many multimedia applications. One example would be changing the brightness of an image. Each pixel of an image consists of three values for the brightness of the red (R), green (G) and blue (B) portions of the color. To change the brightness, the R, G and B values are read from memory, a value is added to (or subtracted from) them, and the resulting values are written back out to memory. Audio digital signal processors (DSPs) would likewise, for volume control, multiply both Left and Right channels simultaneously.

With a SIMD processor there are two improvements to this process. For one, the data is understood to be in blocks, and a number of values can be loaded all at once. Instead of a series of instructions saying "retrieve this pixel, now retrieve the next pixel", a SIMD processor will have a single instruction that effectively says "retrieve n pixels" (where n is a number that varies from design to design). For a variety of reasons, this can take much less time than retrieving each pixel individually, as with a traditional CPU design. Moreover, SIMD instructions can exploit data reuse, where the same operand is used across multiple calculations, via broadcasting features. For example, multiplying several pixels by a constant scalar value can be done more efficiently by loading the scalar once and broadcasting it across a SIMD register.
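The brightness adjustment and broadcasting ideas above can be sketched with SSE2 intrinsics (a sketch assuming an x86 CPU; `brighten16` is an illustrative name). The constant is loaded once and broadcast into every lane, and a saturating add keeps pixel values from wrapping past 255:

```c
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdint.h>

/* Brighten 16 8-bit pixels in one operation. The scalar `amount` is
   broadcast into all 16 lanes, then added with unsigned saturation,
   so 250 + 10 clamps to 255 instead of wrapping to 4. */
void brighten16(uint8_t *px, uint8_t amount) {
    __m128i v = _mm_loadu_si128((const __m128i *)px);
    __m128i k = _mm_set1_epi8((char)amount);          /* broadcast constant */
    _mm_storeu_si128((__m128i *)px, _mm_adds_epu8(v, k)); /* saturating add */
}
```

One load, one broadcast, one add, and one store replace 16 scalar read-modify-write sequences.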

Another advantage is that the instruction operates on all loaded data in a single operation. In other words, if the SIMD system works by loading up eight data points at once, the add operation being applied to the data will happen to all eight values at the same time. This parallelism is separate from the parallelism provided by a superscalar processor; the eight values are processed in parallel even on a non-superscalar processor, and a superscalar processor may be able to perform multiple SIMD operations in parallel.

Disadvantages

  • Not all algorithms can be vectorized easily. For example, a flow-control-heavy task like code parsing may not easily benefit from SIMD; however, it is theoretically possible to vectorize comparisons and "batch flow" to target maximal cache optimality, though this technique will require more intermediate state. Note: batch-pipeline systems (for example, GPUs or software rasterization pipelines) are most advantageous for cache control when implemented with SIMD intrinsics, but they are not exclusive to SIMD features. Further complexity may be needed to avoid dependences within series such as code strings, since independence is required for vectorization.[clarification needed] Additionally, divergent control flow—where different data lanes would follow different execution paths—can lead to underutilization of SIMD hardware. To handle such divergence, techniques like masking and predication are often employed, but they introduce performance overhead and complexity.
  • Large register files, which increase power consumption and required chip area.
  • Currently, implementing an algorithm with SIMD instructions usually requires human labor; most compilers do not generate SIMD instructions from a typical C program, for instance. Automatic vectorization in compilers is an active area of computer science research. (Compare Vector processor.)
  • Programming with given SIMD instruction sets can involve many low-level challenges.
    1. SIMD may have restrictions on data alignment; programmers familiar with a given architecture may not expect this. Worse: the alignment may change from one revision or "compatible" processor to another.
    2. Gathering data into SIMD registers and scattering it to the correct destination locations is tricky (sometimes requiring permute instructions) and can be inefficient.
    3. Specific instructions like rotations or three-operand addition are not available in some SIMD instruction sets.
    4. Instruction sets are architecture-specific: some processors lack SIMD instructions entirely, so programmers must provide non-vectorized implementations (or different vectorized implementations) for them.
    5. Different architectures provide different register sizes (e.g. 64, 128, 256 and 512 bits) and instruction sets, meaning that programmers must provide multiple implementations of vectorized code to operate optimally on any given CPU. In addition, the possible set of SIMD instructions grows with each new register size. Unfortunately, for legacy support reasons, the older versions cannot be retired.
    6. The early MMX instruction set shared a register file with the floating-point stack, which caused inefficiencies when mixing floating-point and MMX code. However, SSE2 corrects this.
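The masking and predication technique mentioned in the first point above can be sketched with SSE2 intrinsics: both sides of a per-lane branch are computed, and a comparison mask selects the correct result in each lane. This is a minimal sketch assuming an x86 CPU; `select4` is an illustrative name:

```c
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdint.h>

/* Branch-free per-lane select: out[i] = a[i] > 0 ? a[i]*2 : -a[i].
   Divergent control flow is handled by predication: both branches are
   evaluated for every lane, then blended with a comparison mask. The
   overhead is that work for the untaken branch is still performed. */
void select4(const int32_t *a, int32_t *out) {
    __m128i va   = _mm_loadu_si128((const __m128i *)a);
    __m128i zero = _mm_setzero_si128();
    __m128i mask = _mm_cmpgt_epi32(va, zero);      /* all-ones where a[i] > 0 */
    __m128i dbl  = _mm_add_epi32(va, va);          /* "then" branch: a[i]*2   */
    __m128i neg  = _mm_sub_epi32(zero, va);        /* "else" branch: -a[i]    */
    __m128i res  = _mm_or_si128(_mm_and_si128(mask, dbl),
                                _mm_andnot_si128(mask, neg));
    _mm_storeu_si128((__m128i *)out, res);
}
```

Newer instruction sets (SSE4.1 blendv, AVX-512 mask registers) express the same blend step directly, but the and/andnot/or idiom shows the underlying mechanism.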

To remedy problems 1 and 5, Cray-style Vector processors use an alternative approach: instead of exposing the sub-register-level details directly to the programmer, the instruction set abstracts out at least the length (number of elements) into a runtime control register, usually named "VL" (Vector Length). The hardware then handles all alignment issues and "strip-mining" of loops. Machines with different vector sizes would be able to run the same code. LLVM calls this vector type "vscale".[citation needed]
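The strip-mining idea can be sketched in plain C, with an ordinary variable standing in for the hardware VL register (an assumption for illustration; real vector hardware performs the clamping itself):

```c
#include <stddef.h>

/* Strip-mined loop sketch: VL plays the role of a Cray-style vector
   length register. The final partial strip is handled by shrinking the
   effective length, so the same code works for any array length and
   could run unchanged on hardware with a different vector width. */
void scale_add(float *dst, const float *src, float k, size_t n) {
    const size_t VL = 8;                 /* assumed hardware vector length */
    for (size_t i = 0; i < n; i += VL) {
        size_t len = (n - i < VL) ? n - i : VL;  /* clamp the last strip */
        for (size_t j = 0; j < len; j++)         /* the "vector" body */
            dst[i + j] += k * src[i + j];
    }
}
```

With fixed-width SIMD the tail loop must instead be written separately (or handled with masks), which is one source of the code-size growth discussed below.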

With SIMD, an order of magnitude increase in code size is not uncommon, when compared to equivalent scalar or equivalent vector code, and an order of magnitude or greater effectiveness (work done per instruction) is achievable with Vector ISAs.[7]

ARM's Scalable Vector Extension takes another approach, more commonly known today as "Predicated" (masked) SIMD. This approach is not as compact as vector processing but is still far better than non-predicated SIMD. Detailed comparative examples are given at Vector processor § Vector instruction example. In addition, all versions of the ARM architecture have offered Load and Store multiple instructions, to Load or Store a block of data from a contiguous block of memory into a range or non-contiguous set of registers.[8]

Chronology

SIMD supercomputer examples excluding vector processors
Year Example
1974 ILLIAC IV - an Array Processor comprising scalar 64-bit PEs
1974 ICL Distributed Array Processor (DAP)
1976 Burroughs Scientific Processor
1981 Geometric-Arithmetic Parallel Processor from Martin Marietta (continued at Lockheed Martin, then at Teranex and Silicon Optix)
1983–1991 Massively Parallel Processor (MPP), from NASA/Goddard Space Flight Center
1985 Connection Machine, models 1 and 2 (CM-1 and CM-2), from Thinking Machines Corporation
1987–1996 MasPar MP-1 and MP-2
1991 Zephyr DC from Wavetracer
2001 Xplor from Pyxsys, Inc.

Hardware


Small-scale (64 or 128 bits) SIMD became popular on general-purpose CPUs in the early 1990s and continued through 1997 and later with Motion Video Instructions (MVI) for Alpha. SIMD instructions can be found, to one degree or another, on most CPUs, including IBM's AltiVec and Signal Processing Engine (SPE) for PowerPC, Hewlett-Packard's (HP) PA-RISC Multimedia Acceleration eXtensions (MAX), Intel's MMX and iwMMXt, Streaming SIMD Extensions (SSE), SSE2, SSE3, SSSE3 and SSE4.x, AMD's 3DNow!, ARC's ARC Video subsystem, SPARC's VIS and VIS2, Sun's MAJC, ARM's Neon technology, MIPS' MDMX (MaDMaX) and MIPS-3D. The IBM, Sony, and Toshiba co-developed Cell processor's Synergistic Processing Element (SPE) instruction set is heavily SIMD based. Philips, now NXP, developed several SIMD processors named Xetal. The Xetal has 320 16-bit processor elements especially designed for vision tasks. Apple's M1 and M2 chips also incorporate SIMD units deeply integrated with their GPU and Neural Engine, using Apple-designed SIMD pipelines optimized for image filtering, convolution, and matrix multiplication. This unified memory architecture helps SIMD instructions operate on shared memory pools more efficiently.

Intel's AVX-512 SIMD instructions process 512 bits of data at once.

Software

The ordinary tripling of four 8-bit numbers. The CPU loads one 8-bit number into R1, multiplies it with R2, and then saves the answer from R3 back to RAM. This process is repeated for each number.
The SIMD tripling of four 8-bit numbers. The CPU loads 4 numbers at once, multiplies them all in one SIMD-multiplication, and saves them all at once back to RAM. In theory, the speed can be multiplied by 4.

SIMD instructions are widely used to process 3D graphics, although modern graphics cards with embedded SIMD have largely taken over this task from the CPU. Some systems also include permute functions that re-pack elements inside vectors, making them especially useful for data processing and compression. They are also used in cryptography.[9][10][11] The trend of general-purpose computing on GPUs (GPGPU) may lead to wider use of SIMD in the future. Recent compilers such as LLVM, GNU Compiler Collection (GCC), and Intel's ICC offer aggressive auto-vectorization options. Developers can often enable these with flags like -O3 or -ftree-vectorize, which guide the compiler to restructure loops for SIMD compatibility.
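A loop shaped for auto-vectorization looks like the following sketch. The `restrict` qualifiers tell the compiler the arrays do not alias, which, together with unit stride and the absence of loop-carried dependences, is what lets -O3 on GCC or Clang typically lower the loop to packed SIMD instructions:

```c
#include <stddef.h>

/* A vectorization-friendly loop: unit stride, no aliasing (restrict),
   no dependence between iterations. Compiled with -O3 (GCC/Clang),
   this body typically becomes packed SIMD additions. */
void vadd(float *restrict c, const float *restrict a,
          const float *restrict b, size_t n) {
    for (size_t i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}
```

Reports such as GCC's -fopt-info-vec can confirm whether the compiler actually vectorized a given loop.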

Adoption of SIMD systems in personal computer software was at first slow, due to a number of problems. One was that many of the early SIMD instruction sets tended to slow overall performance of the system due to the re-use of existing floating point registers. Other systems, like MMX and 3DNow!, offered support for data types that were not interesting to a wide audience and had expensive context switching instructions to switch between using the FPU and MMX registers. Compilers also often lacked support, requiring programmers to resort to assembly language coding.

SIMD on x86 had a slow start. The introduction of 3DNow! by AMD and SSE by Intel confused matters somewhat, but today the system seems to have settled down (after AMD adopted SSE) and newer compilers should result in more SIMD-enabled software. Intel and AMD now both provide optimized math libraries that use SIMD instructions, and open source alternatives like libSIMD, SIMDx86 and SLEEF have started to appear (see also libm).[12]

Apple Computer had somewhat more success, even though they entered the SIMD market later than the rest. AltiVec offered a rich system and can be programmed using increasingly sophisticated compilers from Motorola, IBM and GNU, therefore assembly language programming is rarely needed. Additionally, many of the systems that would benefit from SIMD were supplied by Apple itself, for example iTunes and QuickTime. However, in 2006, Apple computers moved to Intel x86 processors. Apple's APIs and development tools (XCode) were modified to support SSE2 and SSE3 as well as AltiVec. Apple was the dominant purchaser of PowerPC chips from IBM and Freescale Semiconductor. Even though Apple has stopped using PowerPC processors in their products, further development of AltiVec is continued in several PowerPC and Power ISA designs from Freescale and IBM.

SIMD within a register, or SWAR, is a range of techniques and tricks used for performing SIMD in general-purpose registers on hardware that does not provide any direct support for SIMD instructions. This can be used to exploit parallelism in certain algorithms even on hardware that does not support SIMD directly.
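A classic SWAR trick is lane-wise byte addition inside an ordinary 64-bit integer register, using only scalar instructions. The high bit of each byte is masked off so carries cannot propagate across lane boundaries, then restored with XOR:

```c
#include <stdint.h>

/* SWAR: add eight packed bytes lane-wise in one 64-bit register, with
   no SIMD hardware at all. Masking the high bit of every byte keeps
   carries from crossing byte boundaries; the high bits are then
   recombined with XOR. Each byte is added modulo 256, independently. */
uint64_t swar_add8(uint64_t x, uint64_t y) {
    const uint64_t H = 0x8080808080808080ULL;   /* high bit of each byte */
    return ((x & ~H) + (y & ~H)) ^ ((x ^ y) & H);
}
```

Three bitwise operations and one add process eight lanes, versus eight separate byte additions in naive scalar code.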

Programmer interface


It is common for publishers of the SIMD instruction sets to make their own C and C++ language extensions with intrinsic functions or special datatypes (with operator overloading) guaranteeing the generation of vector code. Intel, AltiVec, and ARM NEON provide extensions widely adopted by the compilers targeting their CPUs. (More complex operations are the task of vector math libraries.)

The GNU C Compiler takes the extensions a step further by abstracting them into a universal interface that can be used on any platform by providing a way of defining SIMD datatypes.[13] The LLVM Clang compiler also implements the feature, with an analogous interface defined in the IR.[14] Rust's packed_simd crate (and the experimental std::simd) uses this interface, and so does Swift 2.0+.
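The GCC/Clang generic vector extension mentioned above looks like the following sketch. The compiler lowers the element-wise arithmetic to whatever SIMD instructions the target provides, or to scalar code where none exist:

```c
/* GCC/Clang generic vector extension: a portable SIMD datatype.
   vector_size(16) declares four 32-bit ints packed into 128 bits. */
typedef int v4si __attribute__((vector_size(16)));

/* Element-wise multiply-add written with plain operators; the
   compiler maps these to the target's SIMD instructions. */
v4si madd(v4si a, v4si b, v4si c) {
    return a * b + c;
}
```

Individual lanes can be read and written with the ordinary `[]` subscript operator, which keeps such code testable without any intrinsics.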

C++ has an experimental interface std::experimental::simd that works similarly to the GCC extension. LLVM's libcxx seems to implement it.[citation needed] For GCC and libstdc++, a wrapper library that builds on top of the GCC extension is available.[15]

Microsoft added SIMD to .NET in RyuJIT.[16] The System.Numerics.Vector package, available on NuGet, implements SIMD datatypes.[17] Java also has a new proposed API for SIMD instructions available in OpenJDK 17 in an incubator module.[18] It also has a safe fallback mechanism on unsupported CPUs to simple loops.

Instead of providing an SIMD datatype, compilers can also be hinted to auto-vectorize some loops, potentially taking some assertions about the lack of data dependency. This is not as flexible as manipulating SIMD variables directly, but is easier to use. OpenMP 4.0+ has a #pragma omp simd hint.[19] This OpenMP interface has replaced a wide set of nonstandard extensions, including Cilk's #pragma simd,[20] GCC's #pragma GCC ivdep, and many more.[21]
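A minimal use of the OpenMP simd hint looks like this. The pragma asserts to the compiler that the loop iterations are free of data dependences; when OpenMP support is not enabled, the pragma is simply ignored and the loop runs as ordinary scalar code:

```c
#include <stddef.h>

/* The OpenMP 4.0 simd directive hints that this loop is safe to
   vectorize. Built without -fopenmp(-simd), the pragma is ignored
   and the result is identical, just computed scalar-wise. */
void saxpy(float *y, const float *x, float a, size_t n) {
    #pragma omp simd
    for (size_t i = 0; i < n; i++)
        y[i] += a * x[i];
}
```

This is less flexible than intrinsics or SIMD datatypes, as the text notes, but it requires no change to the loop body itself.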

SIMD multi-versioning


Consumer software is typically expected to work on a range of CPUs covering multiple generations, which could limit the programmer's ability to use new SIMD instructions to improve the computational performance of a program. The solution is to include multiple versions of the same code that uses either older or newer SIMD technologies, and pick one that best fits the user's CPU at run-time (dynamic dispatch). There are two main camps of solutions:

  • Function multi-versioning (FMV): a subroutine in the program or a library is duplicated and compiled for many instruction set extensions, and the program decides which one to use at run-time.
  • Library multi-versioning (LMV): the entire programming library is duplicated for many instruction set extensions, and the operating system or the program decides which one to load at run-time.

FMV, manually coded in assembly language, is quite commonly used in a number of performance-critical libraries such as glibc and libjpeg-turbo. Intel C++ Compiler, GNU Compiler Collection since GCC 6, and Clang since Clang 7 allow for a simplified approach, with the compiler taking care of function duplication and selection. GCC and Clang require explicit target_clones labels in the code to "clone" functions,[22] while ICC does so automatically (under the command-line option /Qax). The Rust programming language also supports FMV. The setup is similar to GCC and Clang in that the code defines which instruction sets to compile for, but cloning is done manually via inlining.[23]
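The run-time selection that FMV automates can be sketched by hand with a function pointer. This is an illustrative skeleton, not the compiler mechanism itself: the feature test is a placeholder (real code would query CPUID, e.g. via GCC/Clang's __builtin_cpu_supports), and `dot_vector` stands in for a build of the routine compiled for a newer instruction set:

```c
#include <stddef.h>

typedef float (*dot_fn)(const float *, const float *, size_t);

/* Baseline implementation, runs on any CPU. */
static float dot_scalar(const float *a, const float *b, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}

/* Stand-in for a version compiled with, e.g., target("avx2");
   here it simply delegates so the sketch runs anywhere. */
static float dot_vector(const float *a, const float *b, size_t n) {
    return dot_scalar(a, b, n);
}

/* Placeholder feature test; a real dispatcher queries the CPU once. */
static int cpu_has_wide_simd(void) {
    return 0;
}

/* Resolve the implementation once, then call through the pointer.
   This mirrors what compiler-generated FMV resolvers (ifuncs) do. */
float dot(const float *a, const float *b, size_t n) {
    static dot_fn impl;
    if (!impl)
        impl = cpu_has_wide_simd() ? dot_vector : dot_scalar;
    return impl(a, b, n);
}
```

Compiler-supported FMV generates this dispatcher automatically, which is precisely the convenience the text describes.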

As using FMV requires code modification on GCC and Clang, vendors more commonly use library multi-versioning: this is easier to achieve as only compiler switches need to be changed. Glibc supports LMV and this functionality is adopted by the Intel-backed Clear Linux project.[24]

SIMD on the web


In 2013 John McCutchan announced that he had created a high-performance interface to SIMD instruction sets for the Dart programming language, bringing the benefits of SIMD to web programs for the first time. The interface consists of two types:[25]

  • Float32x4, 4 single precision floating point values.
  • Int32x4, 4 32-bit integer values.

Instances of these types are immutable and in optimized code are mapped directly to SIMD registers. Operations expressed in Dart typically are compiled into a single instruction without any overhead. This is similar to C and C++ intrinsics. Benchmarks for 4×4 matrix multiplication, 3D vertex transformation, and Mandelbrot set visualization show near 400% speedup compared to scalar code written in Dart.

Intel announced at IDF 2013 that they were implementing McCutchan's specification for both V8 and SpiderMonkey.[26] However, by 2017, SIMD.js was taken out of the ECMAScript standard queue in favor of pursuing a similar interface in WebAssembly.[27] Support for SIMD was added to the WebAssembly 2.0 specification, which was finished in 2022 and became official in December 2024.[28] LLVM's auto-vectorization, when compiling C or C++ to WebAssembly, can target WebAssembly SIMD automatically, while SIMD intrinsics are also available.[29]

Commercial applications


It has generally proven difficult to find sustainable commercial applications for SIMD-only processors.

One that has had some measure of success is the GAPP, which was developed by Lockheed Martin and taken to the commercial sector by their spin-off Teranex. The GAPP's recent incarnations have become a powerful tool in real-time video processing applications like conversion between various video standards and frame rates (NTSC to/from PAL, NTSC to/from high-definition television (HDTV) formats, etc.), deinterlacing, image noise reduction, adaptive video compression, and image enhancement.

A more ubiquitous application for SIMD is found in video games: nearly every modern video game console since 1998 has incorporated a SIMD processor somewhere in its architecture. The PlayStation 2 was unusual in that one of its vector-float units could function as an autonomous digital signal processor (DSP) executing its own instruction stream, or as a coprocessor driven by ordinary CPU instructions. 3D graphics applications tend to lend themselves well to SIMD processing as they rely heavily on operations with 4-dimensional vectors. Microsoft's Direct3D 9.0 now chooses at runtime processor-specific implementations of its own math operations, including the use of SIMD-capable instructions.

A later processor that used vector processing is the Cell processor used in the PlayStation 3, which was developed by IBM in cooperation with Toshiba and Sony. It uses a number of SIMD processors (a non-uniform memory access (NUMA) architecture, each with independent local store and controlled by a general purpose CPU) and is geared towards the huge datasets required by 3D and video processing applications. It differs from traditional ISAs by being SIMD from the ground up with no separate scalar registers.

Ziilabs produced an SIMD type processor for use on mobile devices, such as media players and mobile phones.[30]

Larger scale commercial SIMD processors are available from ClearSpeed Technology, Ltd. and Stream Processors, Inc. ClearSpeed's CSX600 (2004) has 96 cores each with two double-precision floating point units while the CSX700 (2008) has 192. Stream Processors is headed by computer architect Bill Dally. Their Storm-1 processor (2007) contains 80 SIMD cores controlled by a MIPS CPU.

See also


References

  1. ^ Flynn, Michael J. (September 1972). "Some Computer Organizations and Their Effectiveness" (PDF). IEEE Transactions on Computers. C-21 (9): 948–960. doi:10.1109/TC.1972.5009071.
  2. ^ "Archived copy" (PDF). Archived from the original (PDF) on 2025-08-05.{{cite web}}: CS1 maint: archived copy as title (link)
  3. ^ "MIMD1 - XP/S, CM-5" (PDF).
  4. ^ Conte, G.; Tommesani, S.; Zanichelli, F. (2000). "The long and winding road to high-performance image processing with MMX/SSE". Proc. Fifth IEEE Int'l Workshop on Computer Architectures for Machine Perception. doi:10.1109/CAMP.2000.875989. hdl:11381/2297671. S2CID 13180531.
  5. ^ Lee, R.B. (1995). "Realtime MPEG video via software decompression on a PA-RISC processor". digest of papers Compcon '95. Technologies for the Information Superhighway. pp. 186–192. doi:10.1109/CMPCON.1995.512384. ISBN 0-8186-7029-0. S2CID 2262046.
  6. ^ "AMD Zen 4 AVX-512 Performance Analysis On The Ryzen 9 7950X Review". www.phoronix.com. Retrieved 2025-08-05.
  7. ^ Patterson, David; Waterman, Andrew (18 September 2017). "SIMD Instructions Considered Harmful". SIGARCH.
  8. ^ "ARM LDR/STR, LDM/STM instructions - Programmer All". programmerall.com. Retrieved 2025-08-05.
  9. ^ RE: SSE2 speed, showing how SSE2 is used to implement SHA hash algorithms
  10. ^ Salsa20 speed; Salsa20 software, showing a stream cipher implemented using SSE2
  11. ^ Subject: up to 1.4x RSA throughput using SSE2, showing RSA implemented using a non-SIMD SSE2 integer multiply instruction.
  12. ^ "SIMD library math functions". Stack Overflow. Retrieved 16 January 2020.
  13. ^ "Vector Extensions". Using the GNU Compiler Collection (GCC). Retrieved 16 January 2020.
  14. ^ "Clang Language Extensions". Clang 11 documentation. Retrieved 16 January 2020.
  15. ^ "VcDevel/std-simd". VcDevel. 6 August 2020.
  16. ^ "RyuJIT: The next-generation JIT compiler for .NET". 30 September 2013.
  17. ^ "The JIT finally proposed. JIT and SIMD are getting married". 7 April 2014.
  18. ^ "JEP 338: Vector API".
  19. ^ "SIMD Directives". www.openmp.org.
  20. ^ "Tutorial pragma simd". CilkPlus. 18 July 2012. Archived from the original on 4 December 2020. Retrieved 9 August 2020.
  21. ^ Kruse, Michael. "OMP5.1: Loop Transformations" (PDF).
  22. ^ "Function multi-versioning in GCC 6". lwn.net. 22 June 2016.
  23. ^ "2045-target-feature". The Rust RFC Book.
  24. ^ "Transparent use of library packages optimized for Intel? architecture". Clear Linux* Project. Retrieved 8 September 2019.
  25. ^ John McCutchan. "Bringing SIMD to the web via Dart" (PDF). Archived from the original (PDF) on 2025-08-05.
  26. ^ "SIMD in JavaScript". 01.org. 8 May 2014.
  27. ^ "tc39/ecmascript_simd: SIMD numeric type for EcmaScript". GitHub. Ecma TC39. 22 August 2019. Retrieved 8 September 2019.
  28. ^ "Wasm 2.0 Completed - WebAssembly".
  29. ^ "Using SIMD with WebAssembly". Emscripten 4.0.11-git (dev) documentation.
  30. ^ "ZiiLABS ZMS-05 ARM 9 Media Processor". ZiiLabs. Archived from the original on 2025-08-05. Retrieved 2025-08-05.